Test Report: Hyper-V_Windows 18014

                    
3348142c74a021d65da8da3e7947dbd5f1375456:2024-01-22:32819

Failed tests (17/156)

TestAddons/parallel/Registry (73.34s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 36.4857ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-zb8j4" [054090e5-9bcd-4965-a1b7-952cb9b67a12] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0192454s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-6nmtv" [9e5cec3e-0715-4509-a854-d5c5baf6981f] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0154818s
addons_test.go:340: (dbg) Run:  kubectl --context addons-760600 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-760600 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-760600 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.802124s)
addons_test.go:359: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-760600 ip
addons_test.go:359: (dbg) Done: out/minikube-windows-amd64.exe -p addons-760600 ip: (2.8408419s)
addons_test.go:364: expected stderr to be -empty- but got: *"W0122 10:43:39.151476   11596 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n"* .  args "out/minikube-windows-amd64.exe -p addons-760600 ip"
2024/01/22 10:43:41 [DEBUG] GET http://172.23.176.254:5000
addons_test.go:388: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-760600 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-windows-amd64.exe -p addons-760600 addons disable registry --alsologtostderr -v=1: (16.8312416s)
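The actual failure above is the assertion at addons_test.go:364: the test expects `minikube ip` to produce an empty stderr, but a benign Docker CLI warning about an unresolvable "default" context reaches stderr and trips it. A minimal sketch of that style of empty-stderr check (the helper name is hypothetical; the real assertion lives in minikube's test helpers):

```go
package main

import (
	"fmt"
	"strings"
)

// expectEmptyStderr mirrors the style of check that fails at
// addons_test.go:364: any non-empty stderr, even a benign warning,
// is treated as a test failure. Hypothetical helper, not minikube's API.
func expectEmptyStderr(stderr string) error {
	if strings.TrimSpace(stderr) != "" {
		return fmt.Errorf("expected stderr to be -empty- but got: %q", stderr)
	}
	return nil
}

func main() {
	// Abbreviated form of the warning captured in this run's log.
	warning := `W0122 10:43:39 main.go:291] Unable to resolve the current Docker CLI context "default"`
	fmt.Println(expectEmptyStderr("") == nil)      // a clean run passes
	fmt.Println(expectEmptyStderr(warning) == nil) // the warning alone fails the check
}
```

Under this reading, the addon itself worked (the registry probe and disable both succeeded); the test fails only on stderr hygiene.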
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-760600 -n addons-760600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-760600 -n addons-760600: (13.2891563s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-760600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-760600 logs -n 25: (10.255064s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-879800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:36 UTC |                     |
	|         | -p download-only-879800                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:36 UTC | 22 Jan 24 10:36 UTC |
	| delete  | -p download-only-879800                                                                     | download-only-879800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:36 UTC | 22 Jan 24 10:36 UTC |
	| start   | -o=json --download-only                                                                     | download-only-789300 | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:36 UTC |                     |
	|         | -p download-only-789300                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:36 UTC | 22 Jan 24 10:36 UTC |
	| delete  | -p download-only-789300                                                                     | download-only-789300 | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:36 UTC | 22 Jan 24 10:36 UTC |
	| start   | -o=json --download-only                                                                     | download-only-792600 | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:36 UTC |                     |
	|         | -p download-only-792600                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                                           |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:36 UTC | 22 Jan 24 10:36 UTC |
	| delete  | -p download-only-792600                                                                     | download-only-792600 | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:36 UTC | 22 Jan 24 10:36 UTC |
	| delete  | -p download-only-879800                                                                     | download-only-879800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:36 UTC | 22 Jan 24 10:36 UTC |
	| delete  | -p download-only-789300                                                                     | download-only-789300 | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:36 UTC | 22 Jan 24 10:36 UTC |
	| delete  | -p download-only-792600                                                                     | download-only-792600 | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:36 UTC | 22 Jan 24 10:36 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-350600 | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:36 UTC |                     |
	|         | binary-mirror-350600                                                                        |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |                   |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |                   |         |                     |                     |
	|         | http://127.0.0.1:54978                                                                      |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | -p binary-mirror-350600                                                                     | binary-mirror-350600 | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:36 UTC | 22 Jan 24 10:36 UTC |
	| addons  | enable dashboard -p                                                                         | addons-760600        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:36 UTC |                     |
	|         | addons-760600                                                                               |                      |                   |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-760600        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:36 UTC |                     |
	|         | addons-760600                                                                               |                      |                   |         |                     |                     |
	| start   | -p addons-760600 --wait=true                                                                | addons-760600        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:36 UTC | 22 Jan 24 10:43 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |                   |         |                     |                     |
	|         | --addons=registry                                                                           |                      |                   |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |                   |         |                     |                     |
	|         | --addons=yakd --driver=hyperv                                                               |                      |                   |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |                   |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-760600 addons                                                                        | addons-760600        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:43 UTC | 22 Jan 24 10:43 UTC |
	|         | disable metrics-server                                                                      |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| ssh     | addons-760600 ssh cat                                                                       | addons-760600        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:43 UTC | 22 Jan 24 10:43 UTC |
	|         | /opt/local-path-provisioner/pvc-957c413a-deb5-4049-8cba-7fb764975890_default_test-pvc/file1 |                      |                   |         |                     |                     |
	| ip      | addons-760600 ip                                                                            | addons-760600        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:43 UTC | 22 Jan 24 10:43 UTC |
	| addons  | addons-760600 addons disable                                                                | addons-760600        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:43 UTC | 22 Jan 24 10:43 UTC |
	|         | registry --alsologtostderr                                                                  |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-760600 addons disable                                                                | addons-760600        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:43 UTC | 22 Jan 24 10:43 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-760600        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:43 UTC | 22 Jan 24 10:44 UTC |
	|         | addons-760600                                                                               |                      |                   |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-760600        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:43 UTC |                     |
	|         | -p addons-760600                                                                            |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/22 10:36:58
	Running on machine: minikube7
	Binary: Built with gc go1.21.6 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0122 10:36:58.407467   11012 out.go:296] Setting OutFile to fd 628 ...
	I0122 10:36:58.408716   11012 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0122 10:36:58.408716   11012 out.go:309] Setting ErrFile to fd 808...
	I0122 10:36:58.408716   11012 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0122 10:36:58.431885   11012 out.go:303] Setting JSON to false
	I0122 10:36:58.434412   11012 start.go:128] hostinfo: {"hostname":"minikube7","uptime":39187,"bootTime":1705880631,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3930 Build 19045.3930","kernelVersion":"10.0.19045.3930 Build 19045.3930","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0122 10:36:58.434412   11012 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0122 10:36:58.436471   11012 out.go:177] * [addons-760600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	I0122 10:36:58.436801   11012 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 10:36:58.436801   11012 notify.go:220] Checking for updates...
	I0122 10:36:58.438271   11012 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0122 10:36:58.439204   11012 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0122 10:36:58.440033   11012 out.go:177]   - MINIKUBE_LOCATION=18014
	I0122 10:36:58.440448   11012 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 10:36:58.443093   11012 driver.go:392] Setting default libvirt URI to qemu:///system
	I0122 10:37:03.901766   11012 out.go:177] * Using the hyperv driver based on user configuration
	I0122 10:37:03.902811   11012 start.go:298] selected driver: hyperv
	I0122 10:37:03.902811   11012 start.go:902] validating driver "hyperv" against <nil>
	I0122 10:37:03.902889   11012 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0122 10:37:03.954951   11012 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0122 10:37:03.956514   11012 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0122 10:37:03.956586   11012 cni.go:84] Creating CNI manager for ""
	I0122 10:37:03.956586   11012 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0122 10:37:03.956658   11012 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0122 10:37:03.956658   11012 start_flags.go:321] config:
	{Name:addons-760600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-760600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker C
RISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0122 10:37:03.957033   11012 iso.go:125] acquiring lock: {Name:mkdec6f26395f187031104148d2e41f41e1ae42b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 10:37:03.958364   11012 out.go:177] * Starting control plane node addons-760600 in cluster addons-760600
	I0122 10:37:03.958364   11012 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0122 10:37:03.959029   11012 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0122 10:37:03.959119   11012 cache.go:56] Caching tarball of preloaded images
	I0122 10:37:03.959373   11012 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0122 10:37:03.959590   11012 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0122 10:37:03.959850   11012 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\config.json ...
	I0122 10:37:03.959850   11012 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\config.json: {Name:mk6fbae78412aab5406de987dfa7ae0171f07af2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 10:37:03.961080   11012 start.go:365] acquiring machines lock for addons-760600: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0122 10:37:03.961776   11012 start.go:369] acquired machines lock for "addons-760600" in 0s
	I0122 10:37:03.961956   11012 start.go:93] Provisioning new machine with config: &{Name:addons-760600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.4 ClusterName:addons-760600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0122 10:37:03.962007   11012 start.go:125] createHost starting for "" (driver="hyperv")
	I0122 10:37:03.962764   11012 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0122 10:37:03.962764   11012 start.go:159] libmachine.API.Create for "addons-760600" (driver="hyperv")
	I0122 10:37:03.962764   11012 client.go:168] LocalClient.Create starting
	I0122 10:37:03.964009   11012 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0122 10:37:04.189462   11012 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0122 10:37:04.286891   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0122 10:37:06.403055   11012 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0122 10:37:06.403188   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:37:06.403188   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0122 10:37:08.157800   11012 main.go:141] libmachine: [stdout =====>] : False
	
	I0122 10:37:08.157951   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:37:08.158028   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0122 10:37:09.655547   11012 main.go:141] libmachine: [stdout =====>] : True
	
	I0122 10:37:09.655667   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:37:09.655667   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0122 10:37:13.262038   11012 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0122 10:37:13.262366   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:37:13.265430   11012 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0122 10:37:13.747013   11012 main.go:141] libmachine: Creating SSH key...
	I0122 10:37:13.935933   11012 main.go:141] libmachine: Creating VM...
	I0122 10:37:13.935933   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0122 10:37:16.774544   11012 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0122 10:37:16.774987   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:37:16.774987   11012 main.go:141] libmachine: Using switch "Default Switch"
	I0122 10:37:16.775175   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0122 10:37:18.541090   11012 main.go:141] libmachine: [stdout =====>] : True
	
	I0122 10:37:18.541090   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:37:18.541090   11012 main.go:141] libmachine: Creating VHD
	I0122 10:37:18.541090   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-760600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0122 10:37:22.347926   11012 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-760600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 0B9AD9E1-B0A4-4E4B-A9BE-F3A4B1AF074E
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0122 10:37:22.347926   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:37:22.348023   11012 main.go:141] libmachine: Writing magic tar header
	I0122 10:37:22.348202   11012 main.go:141] libmachine: Writing SSH key tar header
	I0122 10:37:22.357106   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-760600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-760600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0122 10:37:25.524040   11012 main.go:141] libmachine: [stdout =====>] : 
	I0122 10:37:25.524040   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:37:25.524107   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-760600\disk.vhd' -SizeBytes 20000MB
	I0122 10:37:28.037236   11012 main.go:141] libmachine: [stdout =====>] : 
	I0122 10:37:28.037236   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:37:28.037236   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-760600 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-760600' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I0122 10:37:31.578092   11012 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-760600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0122 10:37:31.578092   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:37:31.578092   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-760600 -DynamicMemoryEnabled $false
	I0122 10:37:33.792961   11012 main.go:141] libmachine: [stdout =====>] : 
	I0122 10:37:33.792961   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:37:33.792961   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-760600 -Count 2
	I0122 10:37:35.960369   11012 main.go:141] libmachine: [stdout =====>] : 
	I0122 10:37:35.960369   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:37:35.960369   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-760600 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-760600\boot2docker.iso'
	I0122 10:37:38.503579   11012 main.go:141] libmachine: [stdout =====>] : 
	I0122 10:37:38.503920   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:37:38.503920   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-760600 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-760600\disk.vhd'
	I0122 10:37:41.099627   11012 main.go:141] libmachine: [stdout =====>] : 
	I0122 10:37:41.099627   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:37:41.099721   11012 main.go:141] libmachine: Starting VM...
	I0122 10:37:41.099721   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-760600
	I0122 10:37:43.975756   11012 main.go:141] libmachine: [stdout =====>] : 
	I0122 10:37:43.975815   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:37:43.975815   11012 main.go:141] libmachine: Waiting for host to start...
	I0122 10:37:43.975815   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:37:46.258365   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:37:46.258538   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:37:46.258618   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-760600 ).networkadapters[0]).ipaddresses[0]
	I0122 10:37:48.783458   11012 main.go:141] libmachine: [stdout =====>] : 
	I0122 10:37:48.783507   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:37:49.785402   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:37:52.057841   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:37:52.057841   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:37:52.057928   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-760600 ).networkadapters[0]).ipaddresses[0]
	I0122 10:37:54.592434   11012 main.go:141] libmachine: [stdout =====>] : 
	I0122 10:37:54.592434   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:37:55.597949   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:37:57.779395   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:37:57.779431   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:37:57.779556   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-760600 ).networkadapters[0]).ipaddresses[0]
	I0122 10:38:00.309567   11012 main.go:141] libmachine: [stdout =====>] : 
	I0122 10:38:00.309873   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:38:01.321249   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:38:03.532684   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:38:03.533002   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:38:03.533056   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-760600 ).networkadapters[0]).ipaddresses[0]
	I0122 10:38:06.067759   11012 main.go:141] libmachine: [stdout =====>] : 
	I0122 10:38:06.067938   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:38:07.068579   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:38:09.304218   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:38:09.304218   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:38:09.304310   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-760600 ).networkadapters[0]).ipaddresses[0]
	I0122 10:38:11.928116   11012 main.go:141] libmachine: [stdout =====>] : 172.23.176.254
	
	I0122 10:38:11.928324   11012 main.go:141] libmachine: [stderr =====>] : 
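The alternating `Get-VM ... .state` / `ipaddresses[0]` calls above are a plain poll-until-ready loop: the driver re-queries the first network adapter about once a second until a non-empty address comes back (empty `[stdout]` lines are attempts where DHCP had not yet assigned one). A minimal sketch of that pattern — `waitForIP` and the injected `query` function are illustrative stand-ins, not minikube's actual API:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
	"time"
)

// waitForIP polls query until it returns a non-empty address or
// maxAttempts is exhausted, sleeping between attempts.
func waitForIP(query func() (string, error), maxAttempts int, delay time.Duration) (string, error) {
	for i := 0; i < maxAttempts; i++ {
		ip, err := query()
		if err != nil {
			return "", err
		}
		if strings.TrimSpace(ip) != "" {
			return strings.TrimSpace(ip), nil
		}
		time.Sleep(delay)
	}
	return "", errors.New("timed out waiting for an IP address")
}

func main() {
	// Simulated adapter: empty for the first few polls, then an address,
	// mirroring the empty [stdout] lines in the log above.
	polls := []string{"", "", "", "172.23.176.254"}
	i := 0
	query := func() (string, error) {
		ip := polls[i]
		if i < len(polls)-1 {
			i++
		}
		return ip, nil
	}
	ip, err := waitForIP(query, 10, time.Millisecond)
	fmt.Println(ip, err)
}
```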
	I0122 10:38:11.928324   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:38:14.108346   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:38:14.108801   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:38:14.108801   11012 machine.go:88] provisioning docker machine ...
	I0122 10:38:14.108968   11012 buildroot.go:166] provisioning hostname "addons-760600"
	I0122 10:38:14.109047   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:38:16.258001   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:38:16.258198   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:38:16.258198   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-760600 ).networkadapters[0]).ipaddresses[0]
	I0122 10:38:18.812116   11012 main.go:141] libmachine: [stdout =====>] : 172.23.176.254
	
	I0122 10:38:18.812116   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:38:18.817889   11012 main.go:141] libmachine: Using SSH client type: native
	I0122 10:38:18.827889   11012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.176.254 22 <nil> <nil>}
	I0122 10:38:18.827889   11012 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-760600 && echo "addons-760600" | sudo tee /etc/hostname
	I0122 10:38:18.997052   11012 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-760600
	
	I0122 10:38:18.997283   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:38:21.166056   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:38:21.166056   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:38:21.166314   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-760600 ).networkadapters[0]).ipaddresses[0]
	I0122 10:38:23.710335   11012 main.go:141] libmachine: [stdout =====>] : 172.23.176.254
	
	I0122 10:38:23.710335   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:38:23.715747   11012 main.go:141] libmachine: Using SSH client type: native
	I0122 10:38:23.716581   11012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.176.254 22 <nil> <nil>}
	I0122 10:38:23.716581   11012 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-760600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-760600/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-760600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0122 10:38:23.872147   11012 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 10:38:23.872733   11012 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0122 10:38:23.872850   11012 buildroot.go:174] setting up certificates
	I0122 10:38:23.872901   11012 provision.go:83] configureAuth start
	I0122 10:38:23.872942   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:38:25.970019   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:38:25.970019   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:38:25.970019   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-760600 ).networkadapters[0]).ipaddresses[0]
	I0122 10:38:28.515181   11012 main.go:141] libmachine: [stdout =====>] : 172.23.176.254
	
	I0122 10:38:28.515181   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:38:28.515181   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:38:30.636358   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:38:30.636605   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:38:30.636605   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-760600 ).networkadapters[0]).ipaddresses[0]
	I0122 10:38:33.157326   11012 main.go:141] libmachine: [stdout =====>] : 172.23.176.254
	
	I0122 10:38:33.157326   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:38:33.157326   11012 provision.go:138] copyHostCerts
	I0122 10:38:33.157326   11012 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0122 10:38:33.159368   11012 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0122 10:38:33.160847   11012 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0122 10:38:33.161742   11012 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-760600 san=[172.23.176.254 172.23.176.254 localhost 127.0.0.1 minikube addons-760600]
	I0122 10:38:33.433155   11012 provision.go:172] copyRemoteCerts
	I0122 10:38:33.446781   11012 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0122 10:38:33.446781   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:38:35.606100   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:38:35.606100   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:38:35.606100   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-760600 ).networkadapters[0]).ipaddresses[0]
	I0122 10:38:38.156271   11012 main.go:141] libmachine: [stdout =====>] : 172.23.176.254
	
	I0122 10:38:38.156271   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:38:38.156496   11012 sshutil.go:53] new ssh client: &{IP:172.23.176.254 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-760600\id_rsa Username:docker}
	I0122 10:38:38.268749   11012 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8219468s)
	I0122 10:38:38.268749   11012 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0122 10:38:38.310050   11012 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0122 10:38:38.351054   11012 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0122 10:38:38.389816   11012 provision.go:86] duration metric: configureAuth took 14.5167587s
	I0122 10:38:38.389816   11012 buildroot.go:189] setting minikube options for container-runtime
	I0122 10:38:38.390623   11012 config.go:182] Loaded profile config "addons-760600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 10:38:38.390704   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:38:40.514230   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:38:40.514263   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:38:40.514263   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-760600 ).networkadapters[0]).ipaddresses[0]
	I0122 10:38:43.045549   11012 main.go:141] libmachine: [stdout =====>] : 172.23.176.254
	
	I0122 10:38:43.045549   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:38:43.052790   11012 main.go:141] libmachine: Using SSH client type: native
	I0122 10:38:43.053583   11012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.176.254 22 <nil> <nil>}
	I0122 10:38:43.053583   11012 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0122 10:38:43.209083   11012 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0122 10:38:43.209083   11012 buildroot.go:70] root file system type: tmpfs
	I0122 10:38:43.209083   11012 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0122 10:38:43.209083   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:38:45.308747   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:38:45.308747   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:38:45.308747   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-760600 ).networkadapters[0]).ipaddresses[0]
	I0122 10:38:47.802657   11012 main.go:141] libmachine: [stdout =====>] : 172.23.176.254
	
	I0122 10:38:47.803029   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:38:47.806757   11012 main.go:141] libmachine: Using SSH client type: native
	I0122 10:38:47.807530   11012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.176.254 22 <nil> <nil>}
	I0122 10:38:47.808120   11012 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0122 10:38:47.972352   11012 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0122 10:38:47.972456   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:38:50.080471   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:38:50.080640   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:38:50.080640   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-760600 ).networkadapters[0]).ipaddresses[0]
	I0122 10:38:52.639432   11012 main.go:141] libmachine: [stdout =====>] : 172.23.176.254
	
	I0122 10:38:52.639629   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:38:52.646391   11012 main.go:141] libmachine: Using SSH client type: native
	I0122 10:38:52.647167   11012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.176.254 22 <nil> <nil>}
	I0122 10:38:52.647167   11012 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0122 10:38:53.631914   11012 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0122 10:38:53.632016   11012 machine.go:91] provisioned docker machine in 39.5230411s
	I0122 10:38:53.632016   11012 client.go:171] LocalClient.Create took 1m49.6687725s
	I0122 10:38:53.632077   11012 start.go:167] duration metric: libmachine.API.Create for "addons-760600" took 1m49.6687725s
	I0122 10:38:53.632150   11012 start.go:300] post-start starting for "addons-760600" (driver="hyperv")
	I0122 10:38:53.632207   11012 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0122 10:38:53.645379   11012 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0122 10:38:53.645379   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:38:55.771240   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:38:55.771388   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:38:55.771516   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-760600 ).networkadapters[0]).ipaddresses[0]
	I0122 10:38:58.317905   11012 main.go:141] libmachine: [stdout =====>] : 172.23.176.254
	
	I0122 10:38:58.317905   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:38:58.318274   11012 sshutil.go:53] new ssh client: &{IP:172.23.176.254 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-760600\id_rsa Username:docker}
	I0122 10:38:58.429978   11012 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7845785s)
	I0122 10:38:58.444881   11012 ssh_runner.go:195] Run: cat /etc/os-release
	I0122 10:38:58.451727   11012 info.go:137] Remote host: Buildroot 2021.02.12
	I0122 10:38:58.451727   11012 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0122 10:38:58.451727   11012 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0122 10:38:58.452465   11012 start.go:303] post-start completed in 4.8202942s
	I0122 10:38:58.455277   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:39:00.598122   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:39:00.598314   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:39:00.598314   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-760600 ).networkadapters[0]).ipaddresses[0]
	I0122 10:39:03.133154   11012 main.go:141] libmachine: [stdout =====>] : 172.23.176.254
	
	I0122 10:39:03.133154   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:39:03.133408   11012 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\config.json ...
	I0122 10:39:03.136739   11012 start.go:128] duration metric: createHost completed in 1m59.1741806s
	I0122 10:39:03.136775   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:39:05.207384   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:39:05.207384   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:39:05.207458   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-760600 ).networkadapters[0]).ipaddresses[0]
	I0122 10:39:07.704299   11012 main.go:141] libmachine: [stdout =====>] : 172.23.176.254
	
	I0122 10:39:07.704299   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:39:07.708294   11012 main.go:141] libmachine: Using SSH client type: native
	I0122 10:39:07.709007   11012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.176.254 22 <nil> <nil>}
	I0122 10:39:07.709007   11012 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0122 10:39:07.851281   11012 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705919947.853293431
	
	I0122 10:39:07.851406   11012 fix.go:206] guest clock: 1705919947.853293431
	I0122 10:39:07.851406   11012 fix.go:219] Guest: 2024-01-22 10:39:07.853293431 +0000 UTC Remote: 2024-01-22 10:39:03.1367756 +0000 UTC m=+124.903168901 (delta=4.716517831s)
	I0122 10:39:07.851545   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:39:10.028732   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:39:10.028817   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:39:10.028895   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-760600 ).networkadapters[0]).ipaddresses[0]
	I0122 10:39:12.594257   11012 main.go:141] libmachine: [stdout =====>] : 172.23.176.254
	
	I0122 10:39:12.594351   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:39:12.599575   11012 main.go:141] libmachine: Using SSH client type: native
	I0122 10:39:12.600284   11012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.176.254 22 <nil> <nil>}
	I0122 10:39:12.600284   11012 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1705919947
	I0122 10:39:12.749700   11012 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan 22 10:39:07 UTC 2024
	
	I0122 10:39:12.749700   11012 fix.go:226] clock set: Mon Jan 22 10:39:07 UTC 2024
	 (err=<nil>)
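Editor's note: the block above is minikube's guest clock sync — it reads the VM clock with `date +%s.%N` over SSH, compares it to the host (`delta=4.716517831s` here), and resets it with `sudo date -s @<epoch>`. A minimal local sketch of the host side; the SSH target shown in the comment is a placeholder taken from this log, not something you can run:

```shell
# Read the host clock as a Unix epoch, as minikube's fix.go does before
# pushing it into the guest.
HOST_EPOCH=$(date +%s)
echo "host epoch: $HOST_EPOCH"
# Over SSH, minikube then runs the equivalent of (address from this log):
#   ssh docker@172.23.176.254 "sudo date -s @$HOST_EPOCH"
```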
	I0122 10:39:12.749700   11012 start.go:83] releasing machines lock for "addons-760600", held for 2m8.7873602s
	I0122 10:39:12.750234   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:39:14.890713   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:39:14.890713   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:39:14.890843   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-760600 ).networkadapters[0]).ipaddresses[0]
	I0122 10:39:17.425237   11012 main.go:141] libmachine: [stdout =====>] : 172.23.176.254
	
	I0122 10:39:17.425237   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:39:17.429723   11012 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0122 10:39:17.430412   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:39:17.442096   11012 ssh_runner.go:195] Run: cat /version.json
	I0122 10:39:17.442096   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:39:19.622700   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:39:19.622848   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:39:19.622848   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-760600 ).networkadapters[0]).ipaddresses[0]
	I0122 10:39:19.647726   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:39:19.647914   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:39:19.647914   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-760600 ).networkadapters[0]).ipaddresses[0]
	I0122 10:39:22.274518   11012 main.go:141] libmachine: [stdout =====>] : 172.23.176.254
	
	I0122 10:39:22.274862   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:39:22.275213   11012 sshutil.go:53] new ssh client: &{IP:172.23.176.254 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-760600\id_rsa Username:docker}
	I0122 10:39:22.296455   11012 main.go:141] libmachine: [stdout =====>] : 172.23.176.254
	
	I0122 10:39:22.296455   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:39:22.297165   11012 sshutil.go:53] new ssh client: &{IP:172.23.176.254 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-760600\id_rsa Username:docker}
	I0122 10:39:22.488128   11012 ssh_runner.go:235] Completed: cat /version.json: (5.0455977s)
	I0122 10:39:22.488128   11012 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0578289s)
	I0122 10:39:22.502389   11012 ssh_runner.go:195] Run: systemctl --version
	I0122 10:39:22.524593   11012 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0122 10:39:22.533699   11012 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0122 10:39:22.547087   11012 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0122 10:39:22.573281   11012 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0122 10:39:22.573281   11012 start.go:475] detecting cgroup driver to use...
	I0122 10:39:22.573281   11012 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 10:39:22.622302   11012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0122 10:39:22.652416   11012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0122 10:39:22.671643   11012 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0122 10:39:22.683445   11012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0122 10:39:22.712744   11012 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 10:39:22.740311   11012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0122 10:39:22.769228   11012 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 10:39:22.798495   11012 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0122 10:39:22.825588   11012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0122 10:39:22.855315   11012 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0122 10:39:22.884020   11012 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0122 10:39:22.912329   11012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 10:39:23.097155   11012 ssh_runner.go:195] Run: sudo systemctl restart containerd
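Editor's note: the run of `sed -i` commands above rewrites /etc/containerd/config.toml in place to select the cgroupfs driver before restarting containerd. A sketch of the key substitution against a throwaway copy of the file (assuming GNU sed; the TOML fragment is a stand-in, not the full minikube config):

```shell
# Throwaway stand-in for /etc/containerd/config.toml.
cat > config.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# Same substitution the log runs: force SystemdCgroup off so containerd
# falls back to the cgroupfs driver, preserving leading indentation (\1).
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' config.toml
grep SystemdCgroup config.toml
```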
	I0122 10:39:23.122570   11012 start.go:475] detecting cgroup driver to use...
	I0122 10:39:23.135696   11012 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0122 10:39:23.169079   11012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 10:39:23.204520   11012 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0122 10:39:23.245095   11012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 10:39:23.277317   11012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0122 10:39:23.311935   11012 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0122 10:39:23.369175   11012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0122 10:39:23.391789   11012 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 10:39:23.435391   11012 ssh_runner.go:195] Run: which cri-dockerd
	I0122 10:39:23.453588   11012 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0122 10:39:23.468631   11012 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0122 10:39:23.510877   11012 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0122 10:39:23.680767   11012 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0122 10:39:23.827093   11012 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0122 10:39:23.827457   11012 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0122 10:39:23.874149   11012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 10:39:24.042077   11012 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0122 10:39:25.542076   11012 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.4999929s)
	I0122 10:39:25.557832   11012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0122 10:39:25.592764   11012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0122 10:39:25.628625   11012 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0122 10:39:25.806288   11012 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0122 10:39:25.990043   11012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 10:39:26.179300   11012 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0122 10:39:26.220412   11012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0122 10:39:26.253108   11012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 10:39:26.415344   11012 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0122 10:39:26.528224   11012 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0122 10:39:26.543742   11012 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0122 10:39:26.551366   11012 start.go:543] Will wait 60s for crictl version
	I0122 10:39:26.565646   11012 ssh_runner.go:195] Run: which crictl
	I0122 10:39:26.581601   11012 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0122 10:39:26.656418   11012 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0122 10:39:26.667087   11012 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0122 10:39:26.714317   11012 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0122 10:39:26.745287   11012 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0122 10:39:26.745617   11012 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0122 10:39:26.750526   11012 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0122 10:39:26.750526   11012 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0122 10:39:26.750526   11012 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0122 10:39:26.750526   11012 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:49:f0:b7 Flags:up|broadcast|multicast|running}
	I0122 10:39:26.753799   11012 ip.go:210] interface addr: fe80::2ecf:c548:6b1f:89a8/64
	I0122 10:39:26.753799   11012 ip.go:210] interface addr: 172.23.176.1/20
	I0122 10:39:26.765952   11012 ssh_runner.go:195] Run: grep 172.23.176.1	host.minikube.internal$ /etc/hosts
	I0122 10:39:26.770856   11012 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.176.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
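Editor's note: the one-liner above idempotently pins `host.minikube.internal` in the guest's /etc/hosts — strip any stale entry, append the current gateway IP, copy back. A sketch against a local copy rather than the real /etc/hosts (the IP is the Default Switch gateway this log discovered):

```shell
# Throwaway hosts file standing in for /etc/hosts.
printf '127.0.0.1\tlocalhost\n' > hosts
# Drop any old host.minikube.internal line, then append the fresh mapping,
# mirroring the grep-v/echo pipeline in the log.
{ grep -v 'host.minikube.internal' hosts; \
  printf '172.23.176.1\thost.minikube.internal\n'; } > hosts.new
grep host.minikube.internal hosts.new
```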
	I0122 10:39:26.792746   11012 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0122 10:39:26.803001   11012 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0122 10:39:26.829118   11012 docker.go:685] Got preloaded images: 
	I0122 10:39:26.829205   11012 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0122 10:39:26.846047   11012 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0122 10:39:26.875577   11012 ssh_runner.go:195] Run: which lz4
	I0122 10:39:26.891669   11012 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0122 10:39:26.897303   11012 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0122 10:39:26.897507   11012 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I0122 10:39:29.932250   11012 docker.go:649] Took 3.051440 seconds to copy over tarball
	I0122 10:39:29.947708   11012 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0122 10:39:36.276261   11012 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (6.3285251s)
	I0122 10:39:36.276261   11012 ssh_runner.go:146] rm: /preloaded.tar.lz4
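Editor's note: the preload step above scp's a ~423 MB lz4 tarball into the guest, unpacks it into /var with xattrs preserved, then removes it. A small portable round-trip sketch, with gzip standing in for the log's `-I lz4` compressor and `./var` for `/var`:

```shell
# Build a toy source tree and pack it, mimicking the preload tarball.
mkdir -p src/lib/docker var
echo 'layer-data' > src/lib/docker/blob
tar -C src -czf preloaded.tar.gz .
# The log's actual extract is:
#   tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
tar -C var -xzf preloaded.tar.gz
cat var/lib/docker/blob
```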
	I0122 10:39:36.346210   11012 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0122 10:39:36.361551   11012 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0122 10:39:36.406308   11012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 10:39:36.571194   11012 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0122 10:39:42.364538   11012 ssh_runner.go:235] Completed: sudo systemctl restart docker: (5.793246s)
	I0122 10:39:42.375877   11012 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0122 10:39:42.413536   11012 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0122 10:39:42.413651   11012 cache_images.go:84] Images are preloaded, skipping loading
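Editor's note: the "Images are preloaded, skipping loading" decision above comes from comparing `docker images --format {{.Repository}}:{{.Tag}}` output against the expected image set (earlier, at 10:39:26, the empty list triggered the tarball path). A sketch of that check with a canned list in place of live docker output:

```shell
# Canned stand-in for `docker images --format {{.Repository}}:{{.Tag}}`.
IMAGES='registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/pause:3.9'
# minikube keys the decision on a required image being present.
if printf '%s\n' "$IMAGES" | grep -q 'kube-apiserver:v1.28.4'; then
  echo 'images preloaded, skipping load'
else
  echo 'preload missing, loading images'
fi
```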
	I0122 10:39:42.424007   11012 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0122 10:39:42.461009   11012 cni.go:84] Creating CNI manager for ""
	I0122 10:39:42.461330   11012 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0122 10:39:42.461407   11012 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0122 10:39:42.461449   11012 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.23.176.254 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-760600 NodeName:addons-760600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.23.176.254"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.23.176.254 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0122 10:39:42.461449   11012 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.23.176.254
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-760600"
	  kubeletExtraArgs:
	    node-ip: 172.23.176.254
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.23.176.254"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0122 10:39:42.461449   11012 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-760600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.176.254
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-760600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0122 10:39:42.473992   11012 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0122 10:39:42.489568   11012 binaries.go:44] Found k8s binaries, skipping transfer
	I0122 10:39:42.505477   11012 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0122 10:39:42.524921   11012 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0122 10:39:42.552123   11012 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0122 10:39:42.578010   11012 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0122 10:39:42.620670   11012 ssh_runner.go:195] Run: grep 172.23.176.254	control-plane.minikube.internal$ /etc/hosts
	I0122 10:39:42.625436   11012 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.176.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 10:39:42.644963   11012 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600 for IP: 172.23.176.254
	I0122 10:39:42.645080   11012 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 10:39:42.645684   11012 certs.go:204] generating minikubeCA CA: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0122 10:39:42.800928   11012 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt ...
	I0122 10:39:42.800928   11012 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt: {Name:mkfaab427ca81a644dd8158f14f3f807f65e8ec2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 10:39:42.803005   11012 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key ...
	I0122 10:39:42.803005   11012 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key: {Name:mke77f92a4900f4ba92d06a20a85ddb2e967d43b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 10:39:42.803991   11012 certs.go:204] generating proxyClientCA CA: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0122 10:39:43.090991   11012 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0122 10:39:43.090991   11012 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mk06242bb3e648e29b1f160fecc7578d1c3ccbe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 10:39:43.092441   11012 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key ...
	I0122 10:39:43.092441   11012 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key: {Name:mk9dbfc690f0c353aa1a789ba901364f0646dd1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 10:39:43.094375   11012 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.key
	I0122 10:39:43.094375   11012 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt with IP's: []
	I0122 10:39:43.416350   11012 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt ...
	I0122 10:39:43.460404   11012 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: {Name:mk6bddbaf187f3d970e0d5e558d5317a48d0832c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 10:39:43.461745   11012 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.key ...
	I0122 10:39:43.461745   11012 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.key: {Name:mk8a6b1babb979a0c278dcc45dab7f1d2331f4c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 10:39:43.463221   11012 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\apiserver.key.390162df
	I0122 10:39:43.463221   11012 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\apiserver.crt.390162df with IP's: [172.23.176.254 10.96.0.1 127.0.0.1 10.0.0.1]
	I0122 10:39:44.049508   11012 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\apiserver.crt.390162df ...
	I0122 10:39:44.049508   11012 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\apiserver.crt.390162df: {Name:mk6fdcf9f9bc30c045c3b4aa7c8f14a4b53aa2b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 10:39:44.051827   11012 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\apiserver.key.390162df ...
	I0122 10:39:44.051827   11012 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\apiserver.key.390162df: {Name:mk562ff466f3d6b7b395024a6856ef7b30b64e5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 10:39:44.052431   11012 certs.go:337] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\apiserver.crt.390162df -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\apiserver.crt
	I0122 10:39:44.065398   11012 certs.go:341] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\apiserver.key.390162df -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\apiserver.key
	I0122 10:39:44.067896   11012 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\proxy-client.key
	I0122 10:39:44.068510   11012 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\proxy-client.crt with IP's: []
	I0122 10:39:44.305683   11012 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\proxy-client.crt ...
	I0122 10:39:44.305683   11012 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\proxy-client.crt: {Name:mka82e1d8da6760273d2817dfe4cd2cc444349d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 10:39:44.307488   11012 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\proxy-client.key ...
	I0122 10:39:44.307488   11012 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\proxy-client.key: {Name:mk58db99b8bf23afa056c591438bfdf4ae57065b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 10:39:44.324990   11012 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0122 10:39:44.326163   11012 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0122 10:39:44.326351   11012 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0122 10:39:44.326570   11012 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0122 10:39:44.327872   11012 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0122 10:39:44.367039   11012 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0122 10:39:44.411393   11012 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0122 10:39:44.451055   11012 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0122 10:39:44.492026   11012 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0122 10:39:44.535383   11012 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0122 10:39:44.576614   11012 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0122 10:39:44.619268   11012 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0122 10:39:44.662240   11012 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0122 10:39:44.700565   11012 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0122 10:39:44.741878   11012 ssh_runner.go:195] Run: openssl version
	I0122 10:39:44.760766   11012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0122 10:39:44.790697   11012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0122 10:39:44.796615   11012 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 22 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0122 10:39:44.808956   11012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0122 10:39:44.831356   11012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
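Editor's note: the two steps above install minikubeCA into the guest trust store — copy the PEM under /usr/share/ca-certificates, then create the subject-hash symlink (`b5213941.0` here) that OpenSSL's lookup uses under /etc/ssl/certs. A sketch with a throwaway self-signed cert standing in for minikubeCA:

```shell
# Generate a disposable CA cert (stand-in for minikubeCA.pem).
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.pem \
  -days 1 -subj '/CN=minikubeCA-demo' 2>/dev/null
# Compute the subject hash and create the hash-named symlink, as the
# `openssl x509 -hash` + `ln -fs` pair in the log does.
HASH=$(openssl x509 -hash -noout -in ca.pem)
ln -fs ca.pem "$HASH.0"
ls -l "$HASH.0"
```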
	I0122 10:39:44.859528   11012 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0122 10:39:44.864836   11012 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0122 10:39:44.864906   11012 kubeadm.go:404] StartCluster: {Name:addons-760600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-760600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.23.176.254 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0122 10:39:44.874413   11012 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0122 10:39:44.909516   11012 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0122 10:39:44.941812   11012 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0122 10:39:44.967298   11012 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0122 10:39:44.981117   11012 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 10:39:44.981117   11012 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0122 10:39:45.261764   11012 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0122 10:39:59.117339   11012 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0122 10:39:59.117529   11012 kubeadm.go:322] [preflight] Running pre-flight checks
	I0122 10:39:59.117679   11012 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0122 10:39:59.117994   11012 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0122 10:39:59.118276   11012 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0122 10:39:59.118355   11012 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0122 10:39:59.119583   11012 out.go:204]   - Generating certificates and keys ...
	I0122 10:39:59.119793   11012 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0122 10:39:59.119967   11012 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0122 10:39:59.120173   11012 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0122 10:39:59.120325   11012 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0122 10:39:59.120470   11012 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0122 10:39:59.120690   11012 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0122 10:39:59.120806   11012 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0122 10:39:59.121048   11012 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-760600 localhost] and IPs [172.23.176.254 127.0.0.1 ::1]
	I0122 10:39:59.121281   11012 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0122 10:39:59.121544   11012 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-760600 localhost] and IPs [172.23.176.254 127.0.0.1 ::1]
	I0122 10:39:59.121772   11012 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0122 10:39:59.121913   11012 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0122 10:39:59.122064   11012 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0122 10:39:59.122247   11012 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0122 10:39:59.122382   11012 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0122 10:39:59.122504   11012 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0122 10:39:59.122622   11012 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0122 10:39:59.122819   11012 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0122 10:39:59.122977   11012 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0122 10:39:59.123184   11012 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0122 10:39:59.123912   11012 out.go:204]   - Booting up control plane ...
	I0122 10:39:59.124218   11012 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0122 10:39:59.124425   11012 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0122 10:39:59.124595   11012 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0122 10:39:59.124933   11012 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0122 10:39:59.125150   11012 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0122 10:39:59.125306   11012 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0122 10:39:59.125765   11012 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0122 10:39:59.125974   11012 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004192 seconds
	I0122 10:39:59.126192   11012 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0122 10:39:59.126565   11012 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0122 10:39:59.126767   11012 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0122 10:39:59.127207   11012 kubeadm.go:322] [mark-control-plane] Marking the node addons-760600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0122 10:39:59.127247   11012 kubeadm.go:322] [bootstrap-token] Using token: fxi7ir.9ryi1ee7acnizhsq
	I0122 10:39:59.128112   11012 out.go:204]   - Configuring RBAC rules ...
	I0122 10:39:59.128419   11012 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0122 10:39:59.128491   11012 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0122 10:39:59.128491   11012 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0122 10:39:59.129288   11012 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0122 10:39:59.129654   11012 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0122 10:39:59.129863   11012 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0122 10:39:59.130150   11012 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0122 10:39:59.130215   11012 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0122 10:39:59.130283   11012 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0122 10:39:59.130418   11012 kubeadm.go:322] 
	I0122 10:39:59.130561   11012 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0122 10:39:59.130625   11012 kubeadm.go:322] 
	I0122 10:39:59.130939   11012 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0122 10:39:59.130939   11012 kubeadm.go:322] 
	I0122 10:39:59.131015   11012 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0122 10:39:59.131151   11012 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0122 10:39:59.131299   11012 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0122 10:39:59.131366   11012 kubeadm.go:322] 
	I0122 10:39:59.131503   11012 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0122 10:39:59.131573   11012 kubeadm.go:322] 
	I0122 10:39:59.131723   11012 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0122 10:39:59.131792   11012 kubeadm.go:322] 
	I0122 10:39:59.131936   11012 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0122 10:39:59.132171   11012 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0122 10:39:59.132231   11012 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0122 10:39:59.132333   11012 kubeadm.go:322] 
	I0122 10:39:59.132452   11012 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0122 10:39:59.132611   11012 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0122 10:39:59.132611   11012 kubeadm.go:322] 
	I0122 10:39:59.132611   11012 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token fxi7ir.9ryi1ee7acnizhsq \
	I0122 10:39:59.133153   11012 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:d9e856518777d1a58dcd52dda4caee52e72283bad61f5aa3ed0978c1365deb23 \
	I0122 10:39:59.133241   11012 kubeadm.go:322] 	--control-plane 
	I0122 10:39:59.133241   11012 kubeadm.go:322] 
	I0122 10:39:59.133324   11012 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0122 10:39:59.133324   11012 kubeadm.go:322] 
	I0122 10:39:59.133324   11012 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token fxi7ir.9ryi1ee7acnizhsq \
	I0122 10:39:59.133888   11012 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:d9e856518777d1a58dcd52dda4caee52e72283bad61f5aa3ed0978c1365deb23 
	I0122 10:39:59.133888   11012 cni.go:84] Creating CNI manager for ""
	I0122 10:39:59.133981   11012 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0122 10:39:59.134771   11012 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0122 10:39:59.147218   11012 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0122 10:39:59.167036   11012 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0122 10:39:59.192757   11012 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0122 10:39:59.209012   11012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=02088786cd935ebc1444d13e603368558b96d0e3 minikube.k8s.io/name=addons-760600 minikube.k8s.io/updated_at=2024_01_22T10_39_59_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 10:39:59.209012   11012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 10:39:59.253043   11012 ops.go:34] apiserver oom_adj: -16
	I0122 10:39:59.663120   11012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 10:40:00.170664   11012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 10:40:00.659837   11012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 10:40:01.166587   11012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 10:40:01.663688   11012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 10:40:02.169653   11012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 10:40:02.670196   11012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 10:40:03.172175   11012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 10:40:03.661100   11012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 10:40:04.160697   11012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 10:40:04.662358   11012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 10:40:05.169898   11012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 10:40:05.673333   11012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 10:40:06.159691   11012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 10:40:06.668908   11012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 10:40:07.172589   11012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 10:40:07.671365   11012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 10:40:08.162103   11012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 10:40:08.663720   11012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 10:40:09.170267   11012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 10:40:09.673523   11012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 10:40:10.162721   11012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 10:40:10.665000   11012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 10:40:11.171104   11012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 10:40:11.662533   11012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 10:40:12.168356   11012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 10:40:12.661845   11012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 10:40:12.939179   11012 kubeadm.go:1088] duration metric: took 13.7463619s to wait for elevateKubeSystemPrivileges.
	I0122 10:40:12.939244   11012 kubeadm.go:406] StartCluster complete in 28.0742145s
	I0122 10:40:12.939370   11012 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 10:40:12.939370   11012 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 10:40:12.940264   11012 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 10:40:12.941972   11012 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0122 10:40:12.941972   11012 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0122 10:40:12.941972   11012 addons.go:69] Setting cloud-spanner=true in profile "addons-760600"
	I0122 10:40:12.941972   11012 addons.go:69] Setting helm-tiller=true in profile "addons-760600"
	I0122 10:40:12.941972   11012 addons.go:234] Setting addon cloud-spanner=true in "addons-760600"
	I0122 10:40:12.941972   11012 addons.go:69] Setting metrics-server=true in profile "addons-760600"
	I0122 10:40:12.941972   11012 addons.go:234] Setting addon metrics-server=true in "addons-760600"
	I0122 10:40:12.941972   11012 host.go:66] Checking if "addons-760600" exists ...
	I0122 10:40:12.941972   11012 addons.go:69] Setting storage-provisioner=true in profile "addons-760600"
	I0122 10:40:12.941972   11012 host.go:66] Checking if "addons-760600" exists ...
	I0122 10:40:12.941972   11012 addons.go:234] Setting addon storage-provisioner=true in "addons-760600"
	I0122 10:40:12.942595   11012 host.go:66] Checking if "addons-760600" exists ...
	I0122 10:40:12.941972   11012 addons.go:234] Setting addon helm-tiller=true in "addons-760600"
	I0122 10:40:12.942749   11012 host.go:66] Checking if "addons-760600" exists ...
	I0122 10:40:12.941972   11012 addons.go:69] Setting registry=true in profile "addons-760600"
	I0122 10:40:12.942749   11012 addons.go:234] Setting addon registry=true in "addons-760600"
	I0122 10:40:12.942749   11012 host.go:66] Checking if "addons-760600" exists ...
	I0122 10:40:12.941972   11012 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-760600"
	I0122 10:40:12.941972   11012 addons.go:69] Setting default-storageclass=true in profile "addons-760600"
	I0122 10:40:12.942749   11012 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-760600"
	I0122 10:40:12.942749   11012 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-760600"
	I0122 10:40:12.942749   11012 host.go:66] Checking if "addons-760600" exists ...
	I0122 10:40:12.941972   11012 addons.go:69] Setting gcp-auth=true in profile "addons-760600"
	I0122 10:40:12.943311   11012 mustload.go:65] Loading cluster: addons-760600
	I0122 10:40:12.941972   11012 addons.go:69] Setting ingress=true in profile "addons-760600"
	I0122 10:40:12.943912   11012 addons.go:234] Setting addon ingress=true in "addons-760600"
	I0122 10:40:12.944040   11012 config.go:182] Loaded profile config "addons-760600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 10:40:12.941972   11012 addons.go:69] Setting ingress-dns=true in profile "addons-760600"
	I0122 10:40:12.944175   11012 host.go:66] Checking if "addons-760600" exists ...
	I0122 10:40:12.944175   11012 addons.go:234] Setting addon ingress-dns=true in "addons-760600"
	I0122 10:40:12.944388   11012 host.go:66] Checking if "addons-760600" exists ...
	I0122 10:40:12.941972   11012 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-760600"
	I0122 10:40:12.944834   11012 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-760600"
	I0122 10:40:12.944834   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:40:12.941972   11012 addons.go:69] Setting volumesnapshots=true in profile "addons-760600"
	I0122 10:40:12.945070   11012 addons.go:234] Setting addon volumesnapshots=true in "addons-760600"
	I0122 10:40:12.945070   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:40:12.941972   11012 addons.go:69] Setting yakd=true in profile "addons-760600"
	I0122 10:40:12.945238   11012 addons.go:234] Setting addon yakd=true in "addons-760600"
	I0122 10:40:12.945238   11012 host.go:66] Checking if "addons-760600" exists ...
	I0122 10:40:12.945410   11012 host.go:66] Checking if "addons-760600" exists ...
	I0122 10:40:12.942749   11012 config.go:182] Loaded profile config "addons-760600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 10:40:12.941972   11012 addons.go:69] Setting inspektor-gadget=true in profile "addons-760600"
	I0122 10:40:12.945874   11012 addons.go:234] Setting addon inspektor-gadget=true in "addons-760600"
	I0122 10:40:12.945993   11012 host.go:66] Checking if "addons-760600" exists ...
	I0122 10:40:12.944766   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:40:12.941972   11012 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-760600"
	I0122 10:40:12.946255   11012 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-760600"
	I0122 10:40:12.946477   11012 host.go:66] Checking if "addons-760600" exists ...
	I0122 10:40:12.946798   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:40:12.947042   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:40:12.948309   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:40:12.948836   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:40:12.948949   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:40:12.950442   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:40:12.950442   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:40:12.950442   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:40:12.950442   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:40:12.950442   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:40:12.950442   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:40:12.951146   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:40:13.716473   11012 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-760600" context rescaled to 1 replicas
	I0122 10:40:13.716473   11012 start.go:223] Will wait 6m0s for node &{Name: IP:172.23.176.254 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0122 10:40:13.721703   11012 out.go:177] * Verifying Kubernetes components...
	I0122 10:40:13.756557   11012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 10:40:14.728920   11012 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.7869401s)
	I0122 10:40:14.729915   11012 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.23.176.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0122 10:40:14.732923   11012 node_ready.go:35] waiting up to 6m0s for node "addons-760600" to be "Ready" ...
	I0122 10:40:14.894101   11012 node_ready.go:49] node "addons-760600" has status "Ready":"True"
	I0122 10:40:14.894101   11012 node_ready.go:38] duration metric: took 161.1776ms waiting for node "addons-760600" to be "Ready" ...
	I0122 10:40:14.894101   11012 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0122 10:40:14.969107   11012 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-67dgw" in "kube-system" namespace to be "Ready" ...
	I0122 10:40:17.133141   11012 pod_ready.go:102] pod "coredns-5dd5756b68-67dgw" in "kube-system" namespace has status "Ready":"False"
	I0122 10:40:19.182856   11012 pod_ready.go:102] pod "coredns-5dd5756b68-67dgw" in "kube-system" namespace has status "Ready":"False"
	I0122 10:40:19.451537   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:40:19.451537   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:19.452065   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:40:19.452065   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:19.453422   11012 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0122 10:40:19.454934   11012 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0122 10:40:19.456825   11012 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0122 10:40:19.456825   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0122 10:40:19.456825   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:40:19.454670   11012 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0122 10:40:19.456964   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0122 10:40:19.456964   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:40:19.466114   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:40:19.466114   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:19.468286   11012 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0122 10:40:19.476963   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:40:19.477981   11012 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0122 10:40:19.477981   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0122 10:40:19.477981   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:19.477981   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:40:19.478975   11012 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0122 10:40:19.480158   11012 out.go:177]   - Using image docker.io/registry:2.8.3
	I0122 10:40:19.481299   11012 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0122 10:40:19.481299   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0122 10:40:19.481299   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:40:19.482972   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:40:19.482972   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:19.483957   11012 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0122 10:40:19.484981   11012 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0122 10:40:19.484981   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0122 10:40:19.484981   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:40:19.716405   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:40:19.716405   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:19.731407   11012 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0122 10:40:19.748404   11012 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0122 10:40:19.750403   11012 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0122 10:40:19.762449   11012 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0122 10:40:19.763407   11012 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0122 10:40:19.764394   11012 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0122 10:40:19.765409   11012 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0122 10:40:19.766462   11012 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0122 10:40:19.767406   11012 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0122 10:40:19.767406   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0122 10:40:19.767406   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:40:19.845419   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:40:19.845419   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:19.847410   11012 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0122 10:40:19.848587   11012 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0122 10:40:19.848587   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0122 10:40:19.848587   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:40:19.855740   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:40:19.855740   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:19.856743   11012 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 10:40:19.857723   11012 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0122 10:40:19.857723   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0122 10:40:19.857723   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:40:19.861742   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:40:19.861742   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:19.863742   11012 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0122 10:40:19.870294   11012 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0122 10:40:19.870294   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0122 10:40:19.870842   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:40:19.876287   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:40:19.876287   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:19.876287   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:40:19.877291   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:19.880285   11012 addons.go:234] Setting addon default-storageclass=true in "addons-760600"
	I0122 10:40:19.880285   11012 host.go:66] Checking if "addons-760600" exists ...
	I0122 10:40:19.880285   11012 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-760600"
	I0122 10:40:19.880285   11012 host.go:66] Checking if "addons-760600" exists ...
	I0122 10:40:19.881286   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:40:19.881286   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:40:19.924102   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:40:19.924102   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:19.924102   11012 host.go:66] Checking if "addons-760600" exists ...
	I0122 10:40:20.262848   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:40:20.262848   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:20.279854   11012 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0122 10:40:20.271873   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:40:20.279854   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:40:20.288296   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:20.297000   11012 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.24.0
	I0122 10:40:20.290004   11012 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0122 10:40:20.290004   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:20.303008   11012 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0122 10:40:20.303008   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0122 10:40:20.303008   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:40:20.325012   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0122 10:40:20.325012   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:40:21.364968   11012 pod_ready.go:102] pod "coredns-5dd5756b68-67dgw" in "kube-system" namespace has status "Ready":"False"
	I0122 10:40:22.726406   11012 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0122 10:40:22.754408   11012 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0122 10:40:22.774408   11012 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0122 10:40:22.807398   11012 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0122 10:40:22.807398   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0122 10:40:22.807398   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:40:23.583434   11012 pod_ready.go:102] pod "coredns-5dd5756b68-67dgw" in "kube-system" namespace has status "Ready":"False"
	I0122 10:40:24.548437   11012 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.23.176.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (9.8184785s)
	I0122 10:40:24.548437   11012 start.go:929] {"host.minikube.internal": 172.23.176.1} host record injected into CoreDNS's ConfigMap
	I0122 10:40:25.392884   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:40:25.392884   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:25.392884   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-760600 ).networkadapters[0]).ipaddresses[0]
	I0122 10:40:25.413866   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:40:25.413866   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:25.414103   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-760600 ).networkadapters[0]).ipaddresses[0]
	I0122 10:40:25.415417   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:40:25.415458   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:25.417363   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-760600 ).networkadapters[0]).ipaddresses[0]
	I0122 10:40:25.430632   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:40:25.430632   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:25.430632   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-760600 ).networkadapters[0]).ipaddresses[0]
	I0122 10:40:25.551157   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:40:25.551157   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:25.551157   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-760600 ).networkadapters[0]).ipaddresses[0]
	I0122 10:40:25.658310   11012 pod_ready.go:102] pod "coredns-5dd5756b68-67dgw" in "kube-system" namespace has status "Ready":"False"
	I0122 10:40:25.734301   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:40:25.734301   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:25.734301   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-760600 ).networkadapters[0]).ipaddresses[0]
	I0122 10:40:25.893358   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:40:25.893358   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:25.893358   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-760600 ).networkadapters[0]).ipaddresses[0]
	I0122 10:40:26.021911   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:40:26.021911   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:26.021911   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-760600 ).networkadapters[0]).ipaddresses[0]
	I0122 10:40:26.025355   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:40:26.025458   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:26.025458   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-760600 ).networkadapters[0]).ipaddresses[0]
	I0122 10:40:26.179176   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:40:26.179176   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:26.179176   11012 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0122 10:40:26.179176   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0122 10:40:26.179176   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:40:26.243519   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:40:26.243519   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:26.244720   11012 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0122 10:40:26.247343   11012 out.go:177]   - Using image docker.io/busybox:stable
	I0122 10:40:26.251353   11012 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0122 10:40:26.251353   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0122 10:40:26.251353   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:40:26.469799   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:40:26.470784   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:26.470784   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-760600 ).networkadapters[0]).ipaddresses[0]
	I0122 10:40:26.471796   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:40:26.471796   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:26.472796   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-760600 ).networkadapters[0]).ipaddresses[0]
	I0122 10:40:27.891432   11012 pod_ready.go:102] pod "coredns-5dd5756b68-67dgw" in "kube-system" namespace has status "Ready":"False"
	I0122 10:40:29.142896   11012 pod_ready.go:92] pod "coredns-5dd5756b68-67dgw" in "kube-system" namespace has status "Ready":"True"
	I0122 10:40:29.142896   11012 pod_ready.go:81] duration metric: took 14.1727341s waiting for pod "coredns-5dd5756b68-67dgw" in "kube-system" namespace to be "Ready" ...
	I0122 10:40:29.142896   11012 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-w5dfb" in "kube-system" namespace to be "Ready" ...
	I0122 10:40:29.301258   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:40:29.301258   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:29.301258   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-760600 ).networkadapters[0]).ipaddresses[0]
	I0122 10:40:29.369360   11012 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0122 10:40:29.369360   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:40:29.440949   11012 pod_ready.go:92] pod "coredns-5dd5756b68-w5dfb" in "kube-system" namespace has status "Ready":"True"
	I0122 10:40:29.440949   11012 pod_ready.go:81] duration metric: took 298.0518ms waiting for pod "coredns-5dd5756b68-w5dfb" in "kube-system" namespace to be "Ready" ...
	I0122 10:40:29.440949   11012 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-760600" in "kube-system" namespace to be "Ready" ...
	I0122 10:40:29.493075   11012 pod_ready.go:92] pod "etcd-addons-760600" in "kube-system" namespace has status "Ready":"True"
	I0122 10:40:29.493075   11012 pod_ready.go:81] duration metric: took 52.1258ms waiting for pod "etcd-addons-760600" in "kube-system" namespace to be "Ready" ...
	I0122 10:40:29.493075   11012 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-760600" in "kube-system" namespace to be "Ready" ...
	I0122 10:40:29.527047   11012 pod_ready.go:92] pod "kube-apiserver-addons-760600" in "kube-system" namespace has status "Ready":"True"
	I0122 10:40:29.527047   11012 pod_ready.go:81] duration metric: took 33.972ms waiting for pod "kube-apiserver-addons-760600" in "kube-system" namespace to be "Ready" ...
	I0122 10:40:29.527047   11012 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-760600" in "kube-system" namespace to be "Ready" ...
	I0122 10:40:29.561327   11012 pod_ready.go:92] pod "kube-controller-manager-addons-760600" in "kube-system" namespace has status "Ready":"True"
	I0122 10:40:29.561327   11012 pod_ready.go:81] duration metric: took 34.2805ms waiting for pod "kube-controller-manager-addons-760600" in "kube-system" namespace to be "Ready" ...
	I0122 10:40:29.561327   11012 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jzbgc" in "kube-system" namespace to be "Ready" ...
	I0122 10:40:29.596702   11012 pod_ready.go:92] pod "kube-proxy-jzbgc" in "kube-system" namespace has status "Ready":"True"
	I0122 10:40:29.596702   11012 pod_ready.go:81] duration metric: took 35.3739ms waiting for pod "kube-proxy-jzbgc" in "kube-system" namespace to be "Ready" ...
	I0122 10:40:29.596702   11012 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-760600" in "kube-system" namespace to be "Ready" ...
	I0122 10:40:29.654682   11012 pod_ready.go:92] pod "kube-scheduler-addons-760600" in "kube-system" namespace has status "Ready":"True"
	I0122 10:40:29.655687   11012 pod_ready.go:81] duration metric: took 58.9851ms waiting for pod "kube-scheduler-addons-760600" in "kube-system" namespace to be "Ready" ...
	I0122 10:40:29.655687   11012 pod_ready.go:38] duration metric: took 14.7615206s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0122 10:40:29.655687   11012 api_server.go:52] waiting for apiserver process to appear ...
	I0122 10:40:29.712093   11012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 10:40:29.910032   11012 api_server.go:72] duration metric: took 16.1934878s to wait for apiserver process to appear ...
	I0122 10:40:29.910032   11012 api_server.go:88] waiting for apiserver healthz status ...
	I0122 10:40:29.910032   11012 api_server.go:253] Checking apiserver healthz at https://172.23.176.254:8443/healthz ...
	I0122 10:40:30.008077   11012 api_server.go:279] https://172.23.176.254:8443/healthz returned 200:
	ok
	I0122 10:40:30.023079   11012 api_server.go:141] control plane version: v1.28.4
	I0122 10:40:30.023079   11012 api_server.go:131] duration metric: took 113.0466ms to wait for apiserver health ...
	I0122 10:40:30.023079   11012 system_pods.go:43] waiting for kube-system pods to appear ...
	I0122 10:40:30.058048   11012 system_pods.go:59] 7 kube-system pods found
	I0122 10:40:30.058048   11012 system_pods.go:61] "coredns-5dd5756b68-67dgw" [c338d490-2a11-4028-9cd6-8788b216f749] Running
	I0122 10:40:30.058048   11012 system_pods.go:61] "coredns-5dd5756b68-w5dfb" [606f17f7-796c-4f0b-a2ac-ae12f02d9fb7] Running
	I0122 10:40:30.058048   11012 system_pods.go:61] "etcd-addons-760600" [7805340b-6052-4d21-ba52-fc4503bc23a5] Running
	I0122 10:40:30.058048   11012 system_pods.go:61] "kube-apiserver-addons-760600" [8ff4dfbe-8b6b-4dd7-835c-5cc55d840b2c] Running
	I0122 10:40:30.058048   11012 system_pods.go:61] "kube-controller-manager-addons-760600" [c1d1437c-8476-48c6-9e36-3466710fe39e] Running
	I0122 10:40:30.058048   11012 system_pods.go:61] "kube-proxy-jzbgc" [e174acf5-7ed3-41f7-ac85-19bb96385e59] Running
	I0122 10:40:30.058048   11012 system_pods.go:61] "kube-scheduler-addons-760600" [f741216a-a265-48b6-881d-d0a6327dabbc] Running
	I0122 10:40:30.058048   11012 system_pods.go:74] duration metric: took 34.9691ms to wait for pod list to return data ...
	I0122 10:40:30.058048   11012 default_sa.go:34] waiting for default service account to be created ...
	I0122 10:40:30.087047   11012 default_sa.go:45] found service account: "default"
	I0122 10:40:30.087047   11012 default_sa.go:55] duration metric: took 28.9986ms for default service account to be created ...
	I0122 10:40:30.087047   11012 system_pods.go:116] waiting for k8s-apps to be running ...
	I0122 10:40:30.177566   11012 system_pods.go:86] 7 kube-system pods found
	I0122 10:40:30.177566   11012 system_pods.go:89] "coredns-5dd5756b68-67dgw" [c338d490-2a11-4028-9cd6-8788b216f749] Running
	I0122 10:40:30.177566   11012 system_pods.go:89] "coredns-5dd5756b68-w5dfb" [606f17f7-796c-4f0b-a2ac-ae12f02d9fb7] Running
	I0122 10:40:30.177566   11012 system_pods.go:89] "etcd-addons-760600" [7805340b-6052-4d21-ba52-fc4503bc23a5] Running
	I0122 10:40:30.177566   11012 system_pods.go:89] "kube-apiserver-addons-760600" [8ff4dfbe-8b6b-4dd7-835c-5cc55d840b2c] Running
	I0122 10:40:30.177566   11012 system_pods.go:89] "kube-controller-manager-addons-760600" [c1d1437c-8476-48c6-9e36-3466710fe39e] Running
	I0122 10:40:30.177566   11012 system_pods.go:89] "kube-proxy-jzbgc" [e174acf5-7ed3-41f7-ac85-19bb96385e59] Running
	I0122 10:40:30.177566   11012 system_pods.go:89] "kube-scheduler-addons-760600" [f741216a-a265-48b6-881d-d0a6327dabbc] Running
	I0122 10:40:30.177566   11012 system_pods.go:126] duration metric: took 90.5186ms to wait for k8s-apps to be running ...
	I0122 10:40:30.177566   11012 system_svc.go:44] waiting for kubelet service to be running ....
	I0122 10:40:30.203568   11012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 10:40:30.300938   11012 system_svc.go:56] duration metric: took 123.3718ms WaitForService to wait for kubelet.
	I0122 10:40:30.300938   11012 kubeadm.go:581] duration metric: took 16.5843925s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0122 10:40:30.300938   11012 node_conditions.go:102] verifying NodePressure condition ...
	I0122 10:40:30.432574   11012 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0122 10:40:30.432574   11012 node_conditions.go:123] node cpu capacity is 2
	I0122 10:40:30.432574   11012 node_conditions.go:105] duration metric: took 131.6352ms to run NodePressure ...
	I0122 10:40:30.433576   11012 start.go:228] waiting for startup goroutines ...
	I0122 10:40:32.339931   11012 main.go:141] libmachine: [stdout =====>] : 172.23.176.254
	
	I0122 10:40:32.339931   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:32.339931   11012 sshutil.go:53] new ssh client: &{IP:172.23.176.254 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-760600\id_rsa Username:docker}
	I0122 10:40:32.368443   11012 main.go:141] libmachine: [stdout =====>] : 172.23.176.254
	
	I0122 10:40:32.369436   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:32.369436   11012 sshutil.go:53] new ssh client: &{IP:172.23.176.254 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-760600\id_rsa Username:docker}
	I0122 10:40:32.441973   11012 main.go:141] libmachine: [stdout =====>] : 172.23.176.254
	
	I0122 10:40:32.441973   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:32.441973   11012 sshutil.go:53] new ssh client: &{IP:172.23.176.254 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-760600\id_rsa Username:docker}
	I0122 10:40:32.498448   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:40:32.498448   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:32.498626   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-760600 ).networkadapters[0]).ipaddresses[0]
	I0122 10:40:32.498626   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:40:32.498626   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:32.498626   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-760600 ).networkadapters[0]).ipaddresses[0]
	I0122 10:40:32.673603   11012 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0122 10:40:32.673603   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0122 10:40:32.680676   11012 main.go:141] libmachine: [stdout =====>] : 172.23.176.254
	
	I0122 10:40:32.680676   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:32.680676   11012 sshutil.go:53] new ssh client: &{IP:172.23.176.254 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-760600\id_rsa Username:docker}
	I0122 10:40:32.783399   11012 main.go:141] libmachine: [stdout =====>] : 172.23.176.254
	
	I0122 10:40:32.783399   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:32.785019   11012 sshutil.go:53] new ssh client: &{IP:172.23.176.254 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-760600\id_rsa Username:docker}
	I0122 10:40:32.866468   11012 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0122 10:40:32.866563   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0122 10:40:32.894848   11012 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0122 10:40:32.894848   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0122 10:40:32.998182   11012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0122 10:40:33.064376   11012 main.go:141] libmachine: [stdout =====>] : 172.23.176.254
	
	I0122 10:40:33.064376   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:33.065118   11012 sshutil.go:53] new ssh client: &{IP:172.23.176.254 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-760600\id_rsa Username:docker}
	I0122 10:40:33.085261   11012 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0122 10:40:33.085261   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0122 10:40:33.119655   11012 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0122 10:40:33.119655   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0122 10:40:33.179397   11012 main.go:141] libmachine: [stdout =====>] : 172.23.176.254
	
	I0122 10:40:33.179490   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:33.180214   11012 sshutil.go:53] new ssh client: &{IP:172.23.176.254 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-760600\id_rsa Username:docker}
	I0122 10:40:33.273246   11012 main.go:141] libmachine: [stdout =====>] : 172.23.176.254
	
	I0122 10:40:33.273246   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:33.274262   11012 sshutil.go:53] new ssh client: &{IP:172.23.176.254 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-760600\id_rsa Username:docker}
	I0122 10:40:33.274262   11012 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0122 10:40:33.274262   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0122 10:40:33.382124   11012 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0122 10:40:33.382124   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0122 10:40:33.382124   11012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0122 10:40:33.395125   11012 main.go:141] libmachine: [stdout =====>] : 172.23.176.254
	
	I0122 10:40:33.395125   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:33.395125   11012 sshutil.go:53] new ssh client: &{IP:172.23.176.254 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-760600\id_rsa Username:docker}
	I0122 10:40:33.537312   11012 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0122 10:40:33.537392   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0122 10:40:33.546854   11012 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0122 10:40:33.546944   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0122 10:40:33.558850   11012 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0122 10:40:33.559407   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0122 10:40:33.600565   11012 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0122 10:40:33.600565   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0122 10:40:33.683441   11012 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0122 10:40:33.683441   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0122 10:40:33.711539   11012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0122 10:40:33.726549   11012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0122 10:40:33.726549   11012 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0122 10:40:33.726549   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0122 10:40:33.740756   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:40:33.740756   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:33.740853   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-760600 ).networkadapters[0]).ipaddresses[0]
	I0122 10:40:33.792410   11012 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0122 10:40:33.792410   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0122 10:40:33.792912   11012 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0122 10:40:33.792912   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0122 10:40:33.882290   11012 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0122 10:40:33.882290   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0122 10:40:33.900279   11012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0122 10:40:33.904298   11012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0122 10:40:33.977279   11012 main.go:141] libmachine: [stdout =====>] : 172.23.176.254
	
	I0122 10:40:33.977279   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:33.977279   11012 sshutil.go:53] new ssh client: &{IP:172.23.176.254 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-760600\id_rsa Username:docker}
	I0122 10:40:34.003958   11012 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0122 10:40:34.004056   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0122 10:40:34.021286   11012 main.go:141] libmachine: [stdout =====>] : 172.23.176.254
	
	I0122 10:40:34.021286   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:34.022278   11012 sshutil.go:53] new ssh client: &{IP:172.23.176.254 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-760600\id_rsa Username:docker}
	I0122 10:40:34.024277   11012 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0122 10:40:34.024277   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0122 10:40:34.144238   11012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0122 10:40:34.196320   11012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0122 10:40:34.240686   11012 main.go:141] libmachine: [stdout =====>] : 172.23.176.254
	
	I0122 10:40:34.240773   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:34.241453   11012 sshutil.go:53] new ssh client: &{IP:172.23.176.254 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-760600\id_rsa Username:docker}
	I0122 10:40:34.250003   11012 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0122 10:40:34.250129   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0122 10:40:34.380291   11012 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0122 10:40:34.380291   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0122 10:40:34.598556   11012 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0122 10:40:34.598556   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0122 10:40:34.649290   11012 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0122 10:40:34.649359   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0122 10:40:34.727676   11012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0122 10:40:34.818951   11012 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0122 10:40:34.818951   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0122 10:40:34.832883   11012 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0122 10:40:34.832963   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0122 10:40:34.911474   11012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0122 10:40:35.044348   11012 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0122 10:40:35.044348   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0122 10:40:35.067426   11012 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0122 10:40:35.067493   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0122 10:40:35.240128   11012 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0122 10:40:35.240207   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0122 10:40:35.264454   11012 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0122 10:40:35.264454   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0122 10:40:35.491120   11012 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0122 10:40:35.491120   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0122 10:40:35.500591   11012 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0122 10:40:35.500591   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0122 10:40:35.630778   11012 main.go:141] libmachine: [stdout =====>] : 172.23.176.254
	
	I0122 10:40:35.630909   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:35.631295   11012 sshutil.go:53] new ssh client: &{IP:172.23.176.254 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-760600\id_rsa Username:docker}
	I0122 10:40:35.661407   11012 main.go:141] libmachine: [stdout =====>] : 172.23.176.254
	
	I0122 10:40:35.661407   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:35.662013   11012 sshutil.go:53] new ssh client: &{IP:172.23.176.254 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-760600\id_rsa Username:docker}
	I0122 10:40:35.722903   11012 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0122 10:40:35.722977   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0122 10:40:35.765080   11012 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0122 10:40:35.765080   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0122 10:40:35.937168   11012 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0122 10:40:35.937168   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0122 10:40:35.942812   11012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0122 10:40:36.106282   11012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0122 10:40:36.220547   11012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0122 10:40:36.298681   11012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0122 10:40:36.451741   11012 main.go:141] libmachine: [stdout =====>] : 172.23.176.254
	
	I0122 10:40:36.451794   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:36.451826   11012 sshutil.go:53] new ssh client: &{IP:172.23.176.254 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-760600\id_rsa Username:docker}
	I0122 10:40:36.766291   11012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.7680921s)
	I0122 10:40:37.024802   11012 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0122 10:40:37.456644   11012 addons.go:234] Setting addon gcp-auth=true in "addons-760600"
	I0122 10:40:37.456644   11012 host.go:66] Checking if "addons-760600" exists ...
	I0122 10:40:37.457639   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:40:39.665414   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:40:39.665414   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:39.678613   11012 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0122 10:40:39.678613   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-760600 ).state
	I0122 10:40:41.186841   11012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.8045648s)
	I0122 10:40:41.186914   11012 addons.go:470] Verifying addon metrics-server=true in "addons-760600"
	I0122 10:40:41.979950   11012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:40:41.979950   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:41.979950   11012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-760600 ).networkadapters[0]).ipaddresses[0]
	I0122 10:40:42.249557   11012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.5228892s)
	I0122 10:40:42.249616   11012 addons.go:470] Verifying addon registry=true in "addons-760600"
	I0122 10:40:42.249616   11012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.3492987s)
	I0122 10:40:42.250555   11012 out.go:177] * Verifying registry addon...
	I0122 10:40:42.249778   11012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.3454426s)
	I0122 10:40:42.249975   11012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.1057012s)
	W0122 10:40:42.251224   11012 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0122 10:40:42.250117   11012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.0537613s)
	I0122 10:40:42.250192   11012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.5224822s)
	I0122 10:40:42.251399   11012 retry.go:31] will retry after 144.765322ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
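The "ensure CRDs are installed first" failure above is the classic CRD/CR ordering race: the `VolumeSnapshotClass` object is applied in the same `kubectl apply` batch as the `snapshot.storage.k8s.io` CRDs, and the API server has not yet registered the new kind when the CR is submitted. The log works around it by retrying with `apply --force` (see the retry at 10:40:42 below); a minimal sketch of the ordering fix, assuming hypothetical file names, would instead apply the CRDs first and wait for them to be established:

```shell
# Sketch only: crds.yaml / snapclass.yaml are illustrative names, not files from this run.
# 1. Apply the snapshot CRDs on their own.
kubectl apply -f crds.yaml

# 2. Block until the API server has registered the new kinds.
kubectl wait --for=condition=established --timeout=60s \
  crd/volumesnapshotclasses.snapshot.storage.k8s.io

# 3. Only now apply objects of the new kind.
kubectl apply -f snapclass.yaml
```

`kubectl wait --for=condition=established` polls the CRD's `Established` status condition, so step 3 cannot hit the "no matches for kind" error that the single-batch apply produced here.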
	I0122 10:40:42.252929   11012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.5413515s)
	I0122 10:40:42.253844   11012 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-760600 service yakd-dashboard -n yakd-dashboard
	
	I0122 10:40:42.253844   11012 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0122 10:40:42.272845   11012 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0122 10:40:42.272845   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:40:42.423899   11012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0122 10:40:42.775715   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:40:43.270177   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:40:43.804404   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:40:44.277583   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:40:44.723416   11012 main.go:141] libmachine: [stdout =====>] : 172.23.176.254
	
	I0122 10:40:44.723416   11012 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:40:44.723495   11012 sshutil.go:53] new ssh client: &{IP:172.23.176.254 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-760600\id_rsa Username:docker}
	I0122 10:40:44.763304   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:40:45.269757   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:40:45.773433   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:40:46.261314   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:40:46.774415   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:40:47.286310   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:40:47.773451   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:40:48.264192   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:40:48.773337   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:40:49.268892   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:40:49.527545   11012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (14.6159619s)
	I0122 10:40:49.527591   11012 addons.go:470] Verifying addon ingress=true in "addons-760600"
	I0122 10:40:49.528706   11012 out.go:177] * Verifying ingress addon...
	I0122 10:40:49.531494   11012 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0122 10:40:49.539614   11012 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0122 10:40:49.539736   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:40:49.769946   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:40:50.039138   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:40:50.266090   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:40:50.538061   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:40:50.781202   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:40:50.860839   11012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (14.7544907s)
	I0122 10:40:50.861147   11012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (14.640534s)
	I0122 10:40:50.861147   11012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (14.9182681s)
	I0122 10:40:50.861210   11012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (14.5624635s)
	I0122 10:40:50.861210   11012 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-760600"
	I0122 10:40:50.861210   11012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.4372734s)
	I0122 10:40:50.861210   11012 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (11.1825464s)
	I0122 10:40:50.862263   11012 out.go:177] * Verifying csi-hostpath-driver addon...
	I0122 10:40:50.863158   11012 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0122 10:40:50.864113   11012 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0122 10:40:50.864841   11012 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0122 10:40:50.864886   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0122 10:40:50.866704   11012 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0122 10:40:50.912873   11012 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0122 10:40:50.912873   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0122 10:40:50.933595   11012 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class csi-hostpath-sc as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "csi-hostpath-sc": the object has been modified; please apply your changes to the latest version and try again]
	I0122 10:40:50.974976   11012 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0122 10:40:50.975049   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0122 10:40:51.054639   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:40:51.063466   11012 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0122 10:40:51.063529   11012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0122 10:40:51.161534   11012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0122 10:40:51.267966   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:40:51.376325   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:40:51.548392   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:40:51.777122   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:40:51.910246   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:40:52.052199   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:40:52.276192   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:40:52.399707   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:40:52.541196   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:40:52.771704   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:40:52.879132   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:40:53.049989   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:40:53.277049   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:40:53.393545   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:40:53.543016   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:40:53.762518   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:40:53.882303   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:40:54.079180   11012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.9176325s)
	I0122 10:40:54.086981   11012 addons.go:470] Verifying addon gcp-auth=true in "addons-760600"
	I0122 10:40:54.087976   11012 out.go:177] * Verifying gcp-auth addon...
	I0122 10:40:54.090971   11012 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0122 10:40:54.098974   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:40:54.101978   11012 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0122 10:40:54.101978   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:40:54.266943   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:40:54.379628   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:40:54.551522   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:40:54.597986   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:40:54.775575   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:40:54.887334   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:40:55.046173   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:40:55.106382   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:40:55.267530   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:40:55.380111   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:40:55.540111   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... identical kapi.go:96 polling lines elided: the same four selectors (kubernetes.io/minikube-addons=gcp-auth, kubernetes.io/minikube-addons=registry, kubernetes.io/minikube-addons=csi-hostpath-driver, app.kubernetes.io/name=ingress-nginx) were each re-polled roughly every 0.5 s from 10:40:55 through 10:41:28, every time reporting current state: Pending: [<nil>] ...]
	I0122 10:41:28.277101   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:28.039678   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:28.099871   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:28.277101   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:28.390665   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:28.546261   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:28.608144   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:28.769143   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:28.880528   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:29.039660   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:29.101497   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:29.261411   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:29.389641   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:29.548325   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:29.607517   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:29.768354   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:29.879324   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:30.050776   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:30.095457   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:30.273175   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:30.387581   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:30.540994   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:30.600083   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:30.763261   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:30.875984   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:31.050545   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:31.109074   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:31.270155   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:31.381191   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:31.538882   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:31.613713   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:31.763219   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:31.877052   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:32.046777   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:32.106740   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:32.266956   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:32.379692   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:32.618973   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:32.618973   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:32.768661   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:32.881782   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:33.039687   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:33.104787   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:33.265346   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:33.377540   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:33.548135   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:33.612943   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:33.771258   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:33.884427   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:34.040843   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:34.101791   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:34.263592   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:34.391070   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:34.675245   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:34.675245   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:34.760944   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:34.889167   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:35.044215   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:35.105353   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:35.263422   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:35.425710   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:35.545285   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:35.604473   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:35.762612   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:35.889695   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:36.046722   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:36.104475   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:36.267537   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:36.389648   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:36.552976   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:36.611021   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:36.979334   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:36.984562   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:37.051305   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:37.097279   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:37.273550   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:37.384158   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:37.546477   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:37.607286   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:37.772831   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:37.880931   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:38.054601   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:38.097879   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:38.274007   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:38.386113   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:38.538434   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:38.599893   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:38.777142   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:38.888417   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:39.040880   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:39.102052   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:39.274933   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:39.393929   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:39.560082   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:39.606003   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:39.770138   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:39.894440   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:40.051726   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:40.098159   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:40.271587   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:40.380777   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:40.539605   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:40.601988   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:40.777969   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:40.888780   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:41.047269   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:41.105957   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:41.264165   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:41.379580   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:41.547857   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:41.609287   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:41.768973   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:41.882322   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:42.052695   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:42.097950   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:42.275073   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:42.386140   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:42.544245   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:42.606705   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:42.777485   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:42.880548   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:43.054196   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:43.096795   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:43.274675   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:43.387533   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:43.544846   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:43.605987   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:43.767623   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:43.877449   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:44.052782   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:44.098188   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:44.277778   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:44.388745   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:44.546059   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:44.606327   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:44.760930   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:44.879320   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:45.050942   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:45.096989   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:45.273547   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:45.386916   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:45.546094   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:45.605577   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:45.764034   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:45.876349   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:46.047396   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:46.110493   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:46.270655   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:46.381895   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:46.541091   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:46.600731   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:46.775697   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:46.888247   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:47.043059   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:47.105853   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:47.266384   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:47.381235   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:47.551520   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:47.598553   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:47.773765   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:47.886062   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:48.043067   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:48.105808   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:48.269279   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:48.379814   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:48.553627   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:48.597835   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:48.772482   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:48.886842   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:49.039953   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:49.102898   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:49.264827   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:49.389562   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:49.665117   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:49.666212   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:49.776653   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:49.885826   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:50.040872   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:50.102883   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:50.276783   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:50.386849   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:50.541315   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:50.600436   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:50.772345   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:50.882344   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:51.053066   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:51.105239   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:51.274093   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:51.386611   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:51.542296   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:51.603056   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:51.779545   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0122 10:41:51.890761   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:52.046230   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:52.107206   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:41:52.268791   11012 kapi.go:107] duration metric: took 1m10.0146353s to wait for kubernetes.io/minikube-addons=registry ...
	I0122 10:41:52.383585   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:41:52.541480   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:41:52.604047   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... repeated kapi.go:96 "waiting for pod ..., current state: Pending: [<nil>]" polling lines for the remaining three selectors (kubernetes.io/minikube-addons=csi-hostpath-driver, app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=gcp-auth), logged roughly every 500ms per selector from 10:41:52.888995 through 10:41:59.875528, elided ...]
	I0122 10:42:00.041072   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:00.109624   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:00.426643   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:00.599137   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:00.599213   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:00.889636   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:01.044544   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:01.109091   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:01.382269   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:01.538276   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:01.599250   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:01.889151   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:02.045393   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:02.110053   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:02.385961   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:02.540760   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:02.601272   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:02.874853   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:03.049386   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:03.112280   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:03.382768   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:03.540297   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:03.600932   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:03.874755   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:04.047519   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:04.108615   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:04.383016   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:04.552925   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:04.596501   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:04.887432   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:05.044080   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:05.107693   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:05.389502   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:05.618661   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:05.619222   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:05.883755   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:06.048331   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:06.111620   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:06.384553   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:06.541211   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:06.602738   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:06.879538   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:07.052616   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:07.097633   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:07.387172   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:07.545310   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:07.606529   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:07.879933   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:08.052813   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:08.099162   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:08.388323   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:08.545081   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:08.603688   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:08.881309   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:09.068133   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:09.098554   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:09.384166   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:09.552853   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:09.598244   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:09.882050   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:10.055287   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:10.097508   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:10.383315   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:10.552762   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:10.597588   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:10.884787   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:11.043917   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:11.101443   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:11.394814   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:11.557415   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:11.605275   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:11.891276   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:12.051287   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:12.158961   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:12.416681   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:12.541206   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:12.606052   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:12.877424   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:13.051103   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:13.109484   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:13.382092   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:13.540714   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:13.602033   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:13.892810   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:14.046638   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:14.108664   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:14.381974   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:14.553107   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:14.599337   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:14.876010   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:15.048598   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:15.109185   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:15.383909   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:15.538922   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:15.600004   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:15.889376   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:16.045908   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:16.105572   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:16.377167   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:16.552138   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:16.609437   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:16.885320   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:17.045714   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:17.107451   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:17.378972   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:17.554266   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:17.595659   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:17.885543   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:18.042559   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:18.106454   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:18.380041   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:18.552982   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:18.599311   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:18.888808   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:19.045365   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:19.106742   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:19.380217   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:19.553613   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:19.598043   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:19.921246   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:20.219894   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:20.220760   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:20.383454   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:20.552023   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:20.597462   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:20.885299   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:21.053793   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:21.097401   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:21.386707   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:21.542527   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:21.602355   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:21.876510   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:22.049107   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:22.111435   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:22.385302   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:22.540109   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:22.600903   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:22.886959   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:23.041317   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:23.103910   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:23.375846   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:23.549068   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:23.611553   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:23.884984   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:24.042346   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:24.106067   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:24.383530   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:24.551684   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:24.597681   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:24.888711   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:25.039255   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:25.104283   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:25.390459   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:25.549049   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:25.606458   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:25.881016   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:26.047286   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:26.111265   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:26.384718   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:26.542875   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:26.602480   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:26.875737   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:27.048300   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:27.110908   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:27.382052   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:27.540022   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:27.602413   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:27.891272   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:28.045554   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:28.109083   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:28.378772   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:28.549105   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:28.609179   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:28.885345   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:29.040312   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:29.101965   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:29.376663   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:29.548421   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:29.610948   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:29.885640   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:30.049856   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:30.100814   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:30.391141   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:30.547007   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:30.608979   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:30.877404   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:31.045907   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:31.107558   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:31.378983   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:31.548181   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:31.611120   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:31.884828   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:32.041934   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:32.101987   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:32.392607   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:32.547882   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:32.610859   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:32.919298   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:33.039937   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:33.101830   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:33.376001   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:33.549166   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:33.609606   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:33.878067   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:34.055686   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:34.101897   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:34.386671   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:34.543727   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:34.606676   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... ~240 further kapi.go:96 polling lines elided: the same three selectors (kubernetes.io/minikube-addons=csi-hostpath-driver, app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=gcp-auth) polled every ~0.5s, all Pending, from 10:42:34 through 10:42:54 ...]
	I0122 10:42:54.883704   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0122 10:42:55.057240   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:55.099337   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:42:55.395997   11012 kapi.go:107] duration metric: took 2m4.5287359s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0122 10:42:55.540395   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:42:55.604492   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... ~190 further kapi.go:96 polling lines elided: app.kubernetes.io/name=ingress-nginx and kubernetes.io/minikube-addons=gcp-auth polled every ~0.5s, both Pending, from 10:42:56 through 10:43:19 ...]
	I0122 10:43:20.038949   11012 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0122 10:43:20.103843   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:43:20.549158   11012 kapi.go:107] duration metric: took 2m31.016915s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0122 10:43:20.610010   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:43:21.110117   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:43:21.604488   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:43:22.101421   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:43:22.611997   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:43:23.098586   11012 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0122 10:43:23.603255   11012 kapi.go:107] duration metric: took 2m29.5116142s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0122 10:43:23.603634   11012 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-760600 cluster.
	I0122 10:43:23.604294   11012 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0122 10:43:23.604850   11012 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0122 10:43:23.605562   11012 out.go:177] * Enabled addons: cloud-spanner, metrics-server, ingress-dns, storage-provisioner, nvidia-device-plugin, helm-tiller, yakd, inspektor-gadget, volumesnapshots, default-storageclass, registry, csi-hostpath-driver, ingress, gcp-auth
	I0122 10:43:23.606596   11012 addons.go:505] enable addons completed in 3m10.6637701s: enabled=[cloud-spanner metrics-server ingress-dns storage-provisioner nvidia-device-plugin helm-tiller yakd inspektor-gadget volumesnapshots default-storageclass registry csi-hostpath-driver ingress gcp-auth]
	I0122 10:43:23.606596   11012 start.go:233] waiting for cluster config update ...
	I0122 10:43:23.606596   11012 start.go:242] writing updated cluster config ...
	I0122 10:43:23.620457   11012 ssh_runner.go:195] Run: rm -f paused
	I0122 10:43:23.862700   11012 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0122 10:43:23.863708   11012 out.go:177] * Done! kubectl is now configured to use "addons-760600" cluster and "default" namespace by default
	
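	The gcp-auth messages above say the addon mounts GCP credentials into every new pod unless the pod carries a `gcp-auth-skip-secret` label. A minimal sketch of opting a pod out, assuming a cluster where the addon is enabled (the pod name and image here are hypothetical, not taken from this test run):

```shell
# Create a pod labeled so the gcp-auth webhook skips credential mounting.
# The label key comes from the minikube output above; name/image are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds              # hypothetical pod name
  labels:
    gcp-auth-skip-secret: "true"  # tells the gcp-auth admission webhook to skip this pod
spec:
  containers:
  - name: app
    image: busybox                # placeholder image
    command: ["sleep", "3600"]
EOF
```

	As the log also notes, pods created before the addon was enabled are not retroactively mutated; they must be recreated (or the addon re-enabled with `--refresh`) to pick up the mounted credentials.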
	
	==> Docker <==
	-- Journal begins at Mon 2024-01-22 10:38:02 UTC, ends at Mon 2024-01-22 10:44:20 UTC. --
	Jan 22 10:44:07 addons-760600 dockerd[1326]: time="2024-01-22T10:44:07.663964275Z" level=warning msg="cleaning up after shim disconnected" id=b707cce3220784b310f04e3a0880ab2bf0a330ea508c7fbeef0f97e034be4f8e namespace=moby
	Jan 22 10:44:07 addons-760600 dockerd[1326]: time="2024-01-22T10:44:07.664126275Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:44:07 addons-760600 dockerd[1326]: time="2024-01-22T10:44:07.909762352Z" level=info msg="shim disconnected" id=ac9f69ab1f4be9ccff639b85afb13c54dfd3e480b8d20beb0767272f4ad9f7b4 namespace=moby
	Jan 22 10:44:07 addons-760600 dockerd[1326]: time="2024-01-22T10:44:07.910136253Z" level=warning msg="cleaning up after shim disconnected" id=ac9f69ab1f4be9ccff639b85afb13c54dfd3e480b8d20beb0767272f4ad9f7b4 namespace=moby
	Jan 22 10:44:07 addons-760600 dockerd[1326]: time="2024-01-22T10:44:07.910398854Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:44:07 addons-760600 dockerd[1320]: time="2024-01-22T10:44:07.910646055Z" level=info msg="ignoring event" container=ac9f69ab1f4be9ccff639b85afb13c54dfd3e480b8d20beb0767272f4ad9f7b4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:44:08 addons-760600 dockerd[1326]: time="2024-01-22T10:44:08.074432335Z" level=info msg="shim disconnected" id=0a71665b46369443f3cfb4127f48a1c28d33c74adde5ee3565d90793fe9616e7 namespace=moby
	Jan 22 10:44:08 addons-760600 dockerd[1326]: time="2024-01-22T10:44:08.074775736Z" level=warning msg="cleaning up after shim disconnected" id=0a71665b46369443f3cfb4127f48a1c28d33c74adde5ee3565d90793fe9616e7 namespace=moby
	Jan 22 10:44:08 addons-760600 dockerd[1320]: time="2024-01-22T10:44:08.075185437Z" level=info msg="ignoring event" container=0a71665b46369443f3cfb4127f48a1c28d33c74adde5ee3565d90793fe9616e7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:44:08 addons-760600 dockerd[1326]: time="2024-01-22T10:44:08.075260438Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:44:12 addons-760600 cri-dockerd[1208]: time="2024-01-22T10:44:12Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.24.0@sha256:2752323358af0f4c4277cce104a5eb07e4d2f695c8571df560165cda6da11b88: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:2752323358af0f4c4277cce104a5eb07e4d2f695c8571df560165cda6da11b88"
	Jan 22 10:44:12 addons-760600 dockerd[1326]: time="2024-01-22T10:44:12.683012093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:44:12 addons-760600 dockerd[1326]: time="2024-01-22T10:44:12.683914696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:44:12 addons-760600 dockerd[1326]: time="2024-01-22T10:44:12.684263997Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:44:12 addons-760600 dockerd[1326]: time="2024-01-22T10:44:12.685691602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:44:13 addons-760600 dockerd[1326]: time="2024-01-22T10:44:13.374638340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:44:13 addons-760600 dockerd[1326]: time="2024-01-22T10:44:13.374875040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:44:13 addons-760600 dockerd[1326]: time="2024-01-22T10:44:13.374912541Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:44:13 addons-760600 dockerd[1326]: time="2024-01-22T10:44:13.375012041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:44:14 addons-760600 cri-dockerd[1208]: time="2024-01-22T10:44:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4a35f452ee4c5661d2fbe61b29d8c5f8ce7c28f1b8a7ef0d0ea7165fc92f3bea/resolv.conf as [nameserver 10.96.0.10 search headlamp.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jan 22 10:44:14 addons-760600 dockerd[1326]: time="2024-01-22T10:44:14.349142661Z" level=info msg="shim disconnected" id=b0b5938f890eff9a84317ed81848aaa83a46586316151377acef824b843f40d7 namespace=moby
	Jan 22 10:44:14 addons-760600 dockerd[1326]: time="2024-01-22T10:44:14.349254662Z" level=warning msg="cleaning up after shim disconnected" id=b0b5938f890eff9a84317ed81848aaa83a46586316151377acef824b843f40d7 namespace=moby
	Jan 22 10:44:14 addons-760600 dockerd[1326]: time="2024-01-22T10:44:14.349273862Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:44:14 addons-760600 dockerd[1320]: time="2024-01-22T10:44:14.350734666Z" level=info msg="ignoring event" container=b0b5938f890eff9a84317ed81848aaa83a46586316151377acef824b843f40d7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:44:14 addons-760600 dockerd[1320]: time="2024-01-22T10:44:14.468540640Z" level=warning msg="reference for unknown type: " digest="sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67" remote="ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	b0b5938f890ef       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:2752323358af0f4c4277cce104a5eb07e4d2f695c8571df560165cda6da11b88                            8 seconds ago        Exited              gadget                                   4                   d0a3b46799dc4       gadget-ntxzr
	4882b3f578a53       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf                                 58 seconds ago       Running             gcp-auth                                 0                   973b1b3c2642c       gcp-auth-d4c87556c-9cq7w
	8a4f07fd36a8a       registry.k8s.io/ingress-nginx/controller@sha256:b3aba22b1da80e7acfc52b115cae1d4c687172cbf2b742d5b502419c25ff340e                             About a minute ago   Running             controller                               0                   982bc09c0e91b       ingress-nginx-controller-69cff4fd79-cm6zm
	3cb7f4634eecc       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          About a minute ago   Running             csi-snapshotter                          0                   23d915b770ad2       csi-hostpathplugin-n5d7t
	2568b1c7b6ac5       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          About a minute ago   Running             csi-provisioner                          0                   23d915b770ad2       csi-hostpathplugin-n5d7t
	ec30c1fe30f55       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            About a minute ago   Running             liveness-probe                           0                   23d915b770ad2       csi-hostpathplugin-n5d7t
	566ae3bad96e4       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           About a minute ago   Running             hostpath                                 0                   23d915b770ad2       csi-hostpathplugin-n5d7t
	0dca2e2eb7d94       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                About a minute ago   Running             node-driver-registrar                    0                   23d915b770ad2       csi-hostpathplugin-n5d7t
	23450e087c555       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             About a minute ago   Running             csi-attacher                             0                   c4d98e3faa462       csi-hostpath-attacher-0
	b0415802baf20       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80                   About a minute ago   Exited              patch                                    0                   59ea87ba171ee       ingress-nginx-admission-patch-5xsrn
	1e50038d379e1       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              About a minute ago   Running             csi-resizer                              0                   78f279a64de94       csi-hostpath-resizer-0
	8e93f9662a135       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80                   About a minute ago   Exited              create                                   0                   2876f679d1d33       ingress-nginx-admission-create-glc57
	f67b11daf111c       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   About a minute ago   Running             csi-external-health-monitor-controller   0                   23d915b770ad2       csi-hostpathplugin-n5d7t
	e4dd4da722f0b       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       About a minute ago   Running             local-path-provisioner                   0                   a6511064f8880       local-path-provisioner-78b46b4d5c-tp4s4
	434378cf54af6       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      About a minute ago   Running             volume-snapshot-controller               0                   fdef08ce57d47       snapshot-controller-58dbcc7b99-9whmx
	52ad79c645b8b       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      About a minute ago   Running             volume-snapshot-controller               0                   bff9e18991fb4       snapshot-controller-58dbcc7b99-9wswl
	7677237371b7e       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                                        2 minutes ago        Running             yakd                                     0                   2f051488e2eb7       yakd-dashboard-9947fc6bf-brt5f
	e7d3d6a2bdc67       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  2 minutes ago        Running             tiller                                   0                   5b865f0b489db       tiller-deploy-7b677967b9-lxkrx
	a2af458efc067       nvcr.io/nvidia/k8s-device-plugin@sha256:339be23400f58c04f09b6ba1d4d2e0e7120648f2b114880513685b22093311f1                                     2 minutes ago        Running             nvidia-device-plugin-ctr                 0                   05cbad9df24fe       nvidia-device-plugin-daemonset-qf96s
	c7c0bb0da2520       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             3 minutes ago        Running             minikube-ingress-dns                     0                   1b19c13843fbc       kube-ingress-dns-minikube
	0a61c9fc54fde       6e38f40d628db                                                                                                                                3 minutes ago        Running             storage-provisioner                      0                   e68ebfc04d65e       storage-provisioner
	989ad0135bb2e       ead0a4a53df89                                                                                                                                3 minutes ago        Running             coredns                                  0                   e872a34e4c02b       coredns-5dd5756b68-67dgw
	b249474ac97b9       83f6cc407eed8                                                                                                                                4 minutes ago        Running             kube-proxy                               0                   54b28724713e2       kube-proxy-jzbgc
	84d193bef9689       e3db313c6dbc0                                                                                                                                4 minutes ago        Running             kube-scheduler                           0                   4e795eaa9df07       kube-scheduler-addons-760600
	04a9602bd3706       73deb9a3f7025                                                                                                                                4 minutes ago        Running             etcd                                     0                   ddad7d0f7ab21       etcd-addons-760600
	c79e95012524b       d058aa5ab969c                                                                                                                                4 minutes ago        Running             kube-controller-manager                  0                   07bb0928e7931       kube-controller-manager-addons-760600
	6926dd4054aef       7fe0e6f37db33                                                                                                                                4 minutes ago        Running             kube-apiserver                           0                   746976aa6945f       kube-apiserver-addons-760600
	
	
	==> controller_ingress [8a4f07fd36a8] <==
	I0122 10:43:19.838398       7 main.go:249] "Running in Kubernetes cluster" major="1" minor="28" git="v1.28.4" state="clean" commit="bae2c62678db2b5053817bc97181fcc2e8388103" platform="linux/amd64"
	I0122 10:43:20.151733       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0122 10:43:20.193068       7 ssl.go:536] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0122 10:43:20.218311       7 nginx.go:260] "Starting NGINX Ingress controller"
	I0122 10:43:20.249774       7 event.go:298] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"b864e18f-4fac-492d-bc53-b17f6f46f92d", APIVersion:"v1", ResourceVersion:"682", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0122 10:43:20.258145       7 event.go:298] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"150cc298-3567-4f3d-a820-7cfb57586ada", APIVersion:"v1", ResourceVersion:"683", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0122 10:43:20.258188       7 event.go:298] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"57ba4f70-89cf-4504-8f47-24082a2381bd", APIVersion:"v1", ResourceVersion:"684", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0122 10:43:21.420416       7 nginx.go:303] "Starting NGINX process"
	I0122 10:43:21.420757       7 leaderelection.go:245] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0122 10:43:21.421412       7 nginx.go:323] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0122 10:43:21.421780       7 controller.go:190] "Configuration changes detected, backend reload required"
	I0122 10:43:21.434884       7 leaderelection.go:255] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0122 10:43:21.435287       7 status.go:84] "New leader elected" identity="ingress-nginx-controller-69cff4fd79-cm6zm"
	I0122 10:43:21.447359       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-69cff4fd79-cm6zm" node="addons-760600"
	I0122 10:43:21.564738       7 controller.go:210] "Backend successfully reloaded"
	I0122 10:43:21.564824       7 controller.go:221] "Initial sync, sleeping for 1 second"
	I0122 10:43:21.565262       7 event.go:298] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-69cff4fd79-cm6zm", UID:"dd2930a7-431c-49c5-9b00-b72ba82dd17f", APIVersion:"v1", ResourceVersion:"1227", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	NGINX Ingress controller
	  Release:       v1.9.5
	  Build:         f503c4bb5fa7d857ad29e94970eb550c2bc00b7c
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.21.6
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [989ad0135bb2] <==
	[INFO] plugin/reload: Running configuration SHA512 = a9e6d52ea38bb1d541e41f968d30641b2b346f270fd5cb95216254c1f2330bb2b9715ab07840a759252c526786b95ee2dc393b58c195fd2a54b587422985693b
	[INFO] Reloading complete
	[INFO] 127.0.0.1:48158 - 14854 "HINFO IN 5353359534008303110.8065716473580379901. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.058241104s
	[INFO] 10.244.0.7:37564 - 27005 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000221699s
	[INFO] 10.244.0.7:37564 - 50529 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0000859s
	[INFO] 10.244.0.7:46232 - 33640 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0001134s
	[INFO] 10.244.0.7:46232 - 8311 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0001415s
	[INFO] 10.244.0.7:41073 - 42013 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000131399s
	[INFO] 10.244.0.7:41073 - 32514 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0001082s
	[INFO] 10.244.0.7:46454 - 30663 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000207499s
	[INFO] 10.244.0.7:46454 - 10689 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0001348s
	[INFO] 10.244.0.7:39107 - 42729 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000054899s
	[INFO] 10.244.0.7:33068 - 62133 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000114699s
	[INFO] 10.244.0.7:60479 - 13079 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000127799s
	[INFO] 10.244.0.7:36776 - 52213 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.0000462s
	[INFO] 10.244.0.22:59180 - 19292 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000337902s
	[INFO] 10.244.0.22:60731 - 33516 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000225602s
	[INFO] 10.244.0.22:33498 - 38344 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000306702s
	[INFO] 10.244.0.22:39161 - 21224 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000136901s
	[INFO] 10.244.0.22:42472 - 32908 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000187101s
	[INFO] 10.244.0.22:43425 - 4745 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000320902s
	[INFO] 10.244.0.22:40878 - 32940 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 192 0.001959414s
	[INFO] 10.244.0.22:38469 - 48695 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 240 0.001885514s
	[INFO] 10.244.0.25:39481 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000754605s
	[INFO] 10.244.0.25:36886 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.0001097s
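	The NXDOMAIN-then-NOERROR pattern above is the resolver's search-list expansion at work: with `options ndots:5` (visible in the cri-dockerd resolv.conf rewrite earlier in the log), even a fully-qualified-looking service name is first tried with every `search` suffix before being tried as-is. A minimal sketch of that expansion (the helper name `candidate_names` is illustrative, not part of minikube or CoreDNS):

```python
def candidate_names(name, search_domains, ndots=5):
    """Mimic the glibc resolver's search-list expansion: a name with
    fewer than `ndots` dots is tried with each search suffix first,
    then as an absolute name."""
    if name.endswith("."):          # trailing dot: absolute, no expansion
        return [name.rstrip(".")]
    if name.count(".") >= ndots:
        return [name] + [f"{name}.{d}" for d in search_domains]
    return [f"{name}.{d}" for d in search_domains] + [name]

# resolv.conf from the log: search kube-system... svc... cluster.local, ndots:5
search = ["kube-system.svc.cluster.local", "svc.cluster.local", "cluster.local"]
for q in candidate_names("registry.kube-system.svc.cluster.local", search):
    print(q)
```

The service name has only 4 dots, so the three suffixed lookups (all NXDOMAIN in the CoreDNS log) come first, and the bare name that finally answers NOERROR comes last.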
	
	
	==> describe nodes <==
	Name:               addons-760600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-760600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=02088786cd935ebc1444d13e603368558b96d0e3
	                    minikube.k8s.io/name=addons-760600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_22T10_39_59_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-760600
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-760600"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jan 2024 10:39:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-760600
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jan 2024 10:44:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jan 2024 10:44:05 +0000   Mon, 22 Jan 2024 10:39:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jan 2024 10:44:05 +0000   Mon, 22 Jan 2024 10:39:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jan 2024 10:44:05 +0000   Mon, 22 Jan 2024 10:39:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jan 2024 10:44:05 +0000   Mon, 22 Jan 2024 10:40:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.23.176.254
	  Hostname:    addons-760600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914588Ki
	  pods:               110
	System Info:
	  Machine ID:                 70a5b3ea78ac48329c0f52bf2d144f77
	  System UUID:                5be26138-8885-6340-bb1a-020ee36fd1cc
	  Boot ID:                    112f9d6a-8af0-451c-b626-6c1254313a4f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (22 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     task-pv-pod-restore                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  gadget                      gadget-ntxzr                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  gcp-auth                    gcp-auth-d4c87556c-9cq7w                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  headlamp                    headlamp-7ddfbb94ff-wfpbq                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  ingress-nginx               ingress-nginx-controller-69cff4fd79-cm6zm    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         3m32s
	  kube-system                 coredns-5dd5756b68-67dgw                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m9s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 csi-hostpathplugin-n5d7t                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 etcd-addons-760600                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m22s
	  kube-system                 kube-apiserver-addons-760600                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-controller-manager-addons-760600        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-proxy-jzbgc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-scheduler-addons-760600                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 nvidia-device-plugin-daemonset-qf96s         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 snapshot-controller-58dbcc7b99-9whmx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 snapshot-controller-58dbcc7b99-9wswl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 tiller-deploy-7b677967b9-lxkrx               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  local-path-storage          local-path-provisioner-78b46b4d5c-tp4s4      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-brt5f               0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     3m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
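	The percentages in the summary follow from the Capacity/Allocatable figures above (2 CPUs, 3914588Ki memory); a quick sketch of the arithmetic, assuming kubectl's integer truncation when formatting:

```python
# Reproduce the request/limit percentages from the node's allocatable
# capacity (values taken from the node description above).
allocatable_cpu_m = 2000          # 2 CPUs = 2000 millicores
allocatable_mem_ki = 3914588      # from "Allocatable: memory"

def pct(used, total):
    return int(used * 100 / total)   # truncate, don't round

print(pct(850, allocatable_cpu_m))          # cpu requests 850m  -> 42
print(pct(388 * 1024, allocatable_mem_ki))  # memory requests 388Mi -> 10
print(pct(426 * 1024, allocatable_mem_ki))  # memory limits 426Mi   -> 11
```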
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m     kube-proxy       
	  Normal  Starting                 4m22s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m22s  kubelet          Node addons-760600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m22s  kubelet          Node addons-760600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m22s  kubelet          Node addons-760600 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m22s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m20s  kubelet          Node addons-760600 status is now: NodeReady
	  Normal  RegisteredNode           4m10s  node-controller  Node addons-760600 event: Registered Node addons-760600 in Controller
	
	
	==> dmesg <==
	[  +0.425487] systemd-fstab-generator[1163]: Ignoring "noauto" for root device
	[  +0.170258] systemd-fstab-generator[1174]: Ignoring "noauto" for root device
	[  +0.186061] systemd-fstab-generator[1185]: Ignoring "noauto" for root device
	[  +0.258420] systemd-fstab-generator[1200]: Ignoring "noauto" for root device
	[ +10.139038] systemd-fstab-generator[1310]: Ignoring "noauto" for root device
	[  +5.606449] kauditd_printk_skb: 29 callbacks suppressed
	[  +7.416881] systemd-fstab-generator[1677]: Ignoring "noauto" for root device
	[  +0.678382] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.631542] systemd-fstab-generator[2590]: Ignoring "noauto" for root device
	[Jan22 10:40] hrtimer: interrupt took 1720755 ns
	[ +23.271244] kauditd_printk_skb: 24 callbacks suppressed
	[ +11.236855] kauditd_printk_skb: 1 callbacks suppressed
	[  +6.095113] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.317360] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.390562] kauditd_printk_skb: 38 callbacks suppressed
	[Jan22 10:41] kauditd_printk_skb: 2 callbacks suppressed
	[Jan22 10:42] kauditd_printk_skb: 18 callbacks suppressed
	[ +24.141076] kauditd_printk_skb: 26 callbacks suppressed
	[Jan22 10:43] kauditd_printk_skb: 18 callbacks suppressed
	[  +8.835639] kauditd_printk_skb: 3 callbacks suppressed
	[ +17.331889] kauditd_printk_skb: 27 callbacks suppressed
	[  +6.522193] kauditd_printk_skb: 8 callbacks suppressed
	[ +12.861370] kauditd_printk_skb: 3 callbacks suppressed
	[Jan22 10:44] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.486495] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [04a9602bd370] <==
	{"level":"info","ts":"2024-01-22T10:41:36.973677Z","caller":"traceutil/trace.go:171","msg":"trace[1894268922] transaction","detail":"{read_only:false; response_revision:916; number_of_response:1; }","duration":"336.730994ms","start":"2024-01-22T10:41:36.636934Z","end":"2024-01-22T10:41:36.973665Z","steps":["trace[1894268922] 'process raft request'  (duration: 330.833423ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-22T10:41:36.973737Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"273.091398ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-22T10:41:36.973765Z","caller":"traceutil/trace.go:171","msg":"trace[1786914328] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:916; }","duration":"273.128098ms","start":"2024-01-22T10:41:36.700629Z","end":"2024-01-22T10:41:36.973758Z","steps":["trace[1786914328] 'agreement among raft nodes before linearized reading'  (duration: 273.064798ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-22T10:41:36.973766Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-22T10:41:36.636923Z","time spent":"336.791994ms","remote":"127.0.0.1:58736","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3380,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" mod_revision:457 > success:<request_put:<key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" value_size:3309 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" > >"}
	{"level":"warn","ts":"2024-01-22T10:41:36.974026Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"205.905318ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81726"}
	{"level":"info","ts":"2024-01-22T10:41:36.97405Z","caller":"traceutil/trace.go:171","msg":"trace[1512104126] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:916; }","duration":"205.931618ms","start":"2024-01-22T10:41:36.768112Z","end":"2024-01-22T10:41:36.974044Z","steps":["trace[1512104126] 'agreement among raft nodes before linearized reading'  (duration: 205.779818ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-22T10:41:49.664576Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.022953ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13488"}
	{"level":"info","ts":"2024-01-22T10:41:49.664641Z","caller":"traceutil/trace.go:171","msg":"trace[1044349184] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:939; }","duration":"120.126953ms","start":"2024-01-22T10:41:49.544503Z","end":"2024-01-22T10:41:49.66463Z","steps":["trace[1044349184] 'range keys from in-memory index tree'  (duration: 119.774455ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-22T10:41:58.165974Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.14383ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13488"}
	{"level":"info","ts":"2024-01-22T10:41:58.16682Z","caller":"traceutil/trace.go:171","msg":"trace[356986648] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:964; }","duration":"115.979927ms","start":"2024-01-22T10:41:58.050811Z","end":"2024-01-22T10:41:58.166791Z","steps":["trace[356986648] 'range keys from in-memory index tree'  (duration: 115.02923ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-22T10:42:19.916486Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.969648ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-01-22T10:42:19.916694Z","caller":"traceutil/trace.go:171","msg":"trace[644217086] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1005; }","duration":"143.207247ms","start":"2024-01-22T10:42:19.773452Z","end":"2024-01-22T10:42:19.91666Z","steps":["trace[644217086] 'range keys from in-memory index tree'  (duration: 142.751249ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-22T10:42:20.217955Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.024415ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10575"}
	{"level":"info","ts":"2024-01-22T10:42:20.21803Z","caller":"traceutil/trace.go:171","msg":"trace[700901575] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1006; }","duration":"117.117314ms","start":"2024-01-22T10:42:20.100885Z","end":"2024-01-22T10:42:20.218003Z","steps":["trace[700901575] 'range keys from in-memory index tree'  (duration: 116.874616ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-22T10:42:20.218496Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.622587ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13488"}
	{"level":"info","ts":"2024-01-22T10:42:20.218525Z","caller":"traceutil/trace.go:171","msg":"trace[792422154] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1006; }","duration":"169.656887ms","start":"2024-01-22T10:42:20.048862Z","end":"2024-01-22T10:42:20.218519Z","steps":["trace[792422154] 'range keys from in-memory index tree'  (duration: 169.231387ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-22T10:43:12.173896Z","caller":"traceutil/trace.go:171","msg":"trace[1256874382] linearizableReadLoop","detail":"{readStateIndex:1264; appliedIndex:1263; }","duration":"130.18095ms","start":"2024-01-22T10:43:12.043636Z","end":"2024-01-22T10:43:12.173816Z","steps":["trace[1256874382] 'read index received'  (duration: 33.428096ms)","trace[1256874382] 'applied index is now lower than readState.Index'  (duration: 96.751954ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-22T10:43:12.173992Z","caller":"traceutil/trace.go:171","msg":"trace[1914257016] transaction","detail":"{read_only:false; response_revision:1208; number_of_response:1; }","duration":"152.522246ms","start":"2024-01-22T10:43:12.02146Z","end":"2024-01-22T10:43:12.173982Z","steps":["trace[1914257016] 'process raft request'  (duration: 148.556011ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-22T10:43:12.174132Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.525853ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13906"}
	{"level":"info","ts":"2024-01-22T10:43:12.174173Z","caller":"traceutil/trace.go:171","msg":"trace[653934006] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1208; }","duration":"130.563353ms","start":"2024-01-22T10:43:12.043587Z","end":"2024-01-22T10:43:12.174151Z","steps":["trace[653934006] 'agreement among raft nodes before linearized reading'  (duration: 130.478152ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-22T10:43:46.927526Z","caller":"traceutil/trace.go:171","msg":"trace[1334134512] transaction","detail":"{read_only:false; response_revision:1385; number_of_response:1; }","duration":"181.780721ms","start":"2024-01-22T10:43:46.74573Z","end":"2024-01-22T10:43:46.92751Z","steps":["trace[1334134512] 'process raft request'  (duration: 181.65852ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-22T10:43:57.770949Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.206755ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/172.23.176.254\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-01-22T10:43:57.771082Z","caller":"traceutil/trace.go:171","msg":"trace[48753654] range","detail":"{range_begin:/registry/masterleases/172.23.176.254; range_end:; response_count:1; response_revision:1420; }","duration":"155.346555ms","start":"2024-01-22T10:43:57.615721Z","end":"2024-01-22T10:43:57.771068Z","steps":["trace[48753654] 'range keys from in-memory index tree'  (duration: 155.094254ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-22T10:44:21.019878Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.472395ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3753"}
	{"level":"info","ts":"2024-01-22T10:44:21.01996Z","caller":"traceutil/trace.go:171","msg":"trace[129401403] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1556; }","duration":"103.558895ms","start":"2024-01-22T10:44:20.91638Z","end":"2024-01-22T10:44:21.019938Z","steps":["trace[129401403] 'range keys from in-memory index tree'  (duration: 103.366795ms)"],"step_count":1}
	
	
	==> gcp-auth [4882b3f578a5] <==
	2024/01/22 10:43:22 GCP Auth Webhook started!
	2024/01/22 10:43:24 Ready to marshal response ...
	2024/01/22 10:43:24 Ready to write response ...
	2024/01/22 10:43:24 Ready to marshal response ...
	2024/01/22 10:43:24 Ready to write response ...
	2024/01/22 10:43:34 Ready to marshal response ...
	2024/01/22 10:43:34 Ready to write response ...
	2024/01/22 10:43:41 Ready to marshal response ...
	2024/01/22 10:43:41 Ready to write response ...
	2024/01/22 10:43:47 Ready to marshal response ...
	2024/01/22 10:43:47 Ready to write response ...
	2024/01/22 10:44:12 Ready to marshal response ...
	2024/01/22 10:44:12 Ready to write response ...
	2024/01/22 10:44:12 Ready to marshal response ...
	2024/01/22 10:44:12 Ready to write response ...
	2024/01/22 10:44:12 Ready to marshal response ...
	2024/01/22 10:44:12 Ready to write response ...
	2024/01/22 10:44:19 Ready to marshal response ...
	2024/01/22 10:44:19 Ready to write response ...
	
	
	==> kernel <==
	 10:44:21 up 6 min,  0 users,  load average: 3.95, 2.73, 1.26
	Linux addons-760600 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [6926dd4054ae] <==
	I0122 10:40:50.792240       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.98.216.182"}
	W0122 10:40:51.822279       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0122 10:40:53.821011       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.105.222.236"}
	I0122 10:40:55.624259       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0122 10:41:42.139278       1 handler_proxy.go:93] no RequestInfo found in the context
	E0122 10:41:42.139371       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0122 10:41:42.139381       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0122 10:41:42.143846       1 handler_proxy.go:93] no RequestInfo found in the context
	E0122 10:41:42.143931       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0122 10:41:42.143945       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0122 10:41:55.624075       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0122 10:42:00.442569       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.19.114:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.19.114:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.19.114:443: connect: connection refused
	W0122 10:42:00.444607       1 handler_proxy.go:93] no RequestInfo found in the context
	E0122 10:42:00.444666       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0122 10:42:00.445603       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0122 10:42:00.445751       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.19.114:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.19.114:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.19.114:443: connect: connection refused
	E0122 10:42:00.448931       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.19.114:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.19.114:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.19.114:443: connect: connection refused
	I0122 10:42:00.582813       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0122 10:42:55.629838       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0122 10:44:01.479539       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0122 10:44:06.290886       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0122 10:44:12.753589       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.69.239"}
	
	
	==> kube-controller-manager [c79e95012524] <==
	I0122 10:43:12.176708       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0122 10:43:12.242505       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0122 10:43:12.245456       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0122 10:43:20.071988       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="98.401µs"
	I0122 10:43:23.176970       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="17.226228ms"
	I0122 10:43:23.178365       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="51.7µs"
	I0122 10:43:24.290079       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I0122 10:43:24.392345       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0122 10:43:24.758323       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0122 10:43:26.452002       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0122 10:43:26.452054       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0122 10:43:35.450235       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="34.70871ms"
	I0122 10:43:35.451780       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="171.301µs"
	I0122 10:43:39.878277       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0122 10:43:45.617009       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-7c66d45ddc" duration="14.3µs"
	I0122 10:43:58.270370       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="9.6µs"
	I0122 10:44:07.303113       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/cloud-spanner-emulator-64c8c85f65" duration="5.4µs"
	I0122 10:44:09.709957       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0122 10:44:11.453565       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0122 10:44:12.836916       1 event.go:307] "Event occurred" object="headlamp/headlamp" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set headlamp-7ddfbb94ff to 1"
	I0122 10:44:12.888541       1 event.go:307] "Event occurred" object="headlamp/headlamp-7ddfbb94ff" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: headlamp-7ddfbb94ff-wfpbq"
	I0122 10:44:12.905954       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-7ddfbb94ff" duration="68.268124ms"
	I0122 10:44:12.971543       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-7ddfbb94ff" duration="61.829803ms"
	I0122 10:44:12.971959       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-7ddfbb94ff" duration="75.5µs"
	I0122 10:44:18.791499       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	
	
	==> kube-proxy [b249474ac97b] <==
	I0122 10:40:15.327031       1 server_others.go:69] "Using iptables proxy"
	I0122 10:40:15.588744       1 node.go:141] Successfully retrieved node IP: 172.23.176.254
	I0122 10:40:20.867720       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0122 10:40:20.867761       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0122 10:40:21.065675       1 server_others.go:152] "Using iptables Proxier"
	I0122 10:40:21.065763       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0122 10:40:21.066153       1 server.go:846] "Version info" version="v1.28.4"
	I0122 10:40:21.066174       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0122 10:40:21.095347       1 config.go:188] "Starting service config controller"
	I0122 10:40:21.095424       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0122 10:40:21.095463       1 config.go:97] "Starting endpoint slice config controller"
	I0122 10:40:21.095473       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0122 10:40:21.096300       1 config.go:315] "Starting node config controller"
	I0122 10:40:21.096316       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0122 10:40:21.251932       1 shared_informer.go:318] Caches are synced for node config
	I0122 10:40:21.251978       1 shared_informer.go:318] Caches are synced for service config
	I0122 10:40:21.252032       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [84d193bef968] <==
	E0122 10:39:55.831633       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0122 10:39:55.831916       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0122 10:39:55.831996       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0122 10:39:55.832576       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0122 10:39:55.832798       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0122 10:39:55.833419       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0122 10:39:56.660653       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0122 10:39:56.660685       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0122 10:39:56.662355       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0122 10:39:56.662382       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0122 10:39:56.692545       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0122 10:39:56.693072       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0122 10:39:56.758984       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0122 10:39:56.759262       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0122 10:39:56.810826       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0122 10:39:56.810860       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0122 10:39:56.857934       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0122 10:39:56.858045       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0122 10:39:57.040536       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0122 10:39:57.040643       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0122 10:39:57.159099       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0122 10:39:57.159152       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0122 10:39:57.240414       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0122 10:39:57.240866       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0122 10:39:59.704125       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-22 10:38:02 UTC, ends at Mon 2024-01-22 10:44:22 UTC. --
	Jan 22 10:44:12 addons-760600 kubelet[2610]: I0122 10:44:12.898946    2610 memory_manager.go:346] "RemoveStaleState removing state" podUID="2ede04f9-dbc4-4c94-9983-e4554447d6eb" containerName="cloud-spanner-emulator"
	Jan 22 10:44:12 addons-760600 kubelet[2610]: I0122 10:44:12.898959    2610 memory_manager.go:346] "RemoveStaleState removing state" podUID="fdc1f6a9-17c8-4ac9-9903-bf92cd1f2111" containerName="helper-pod"
	Jan 22 10:44:13 addons-760600 kubelet[2610]: I0122 10:44:13.040319    2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9edaf3a8-3410-4050-9173-2d8eb81ee652-gcp-creds\") pod \"headlamp-7ddfbb94ff-wfpbq\" (UID: \"9edaf3a8-3410-4050-9173-2d8eb81ee652\") " pod="headlamp/headlamp-7ddfbb94ff-wfpbq"
	Jan 22 10:44:13 addons-760600 kubelet[2610]: I0122 10:44:13.042134    2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x4t2\" (UniqueName: \"kubernetes.io/projected/9edaf3a8-3410-4050-9173-2d8eb81ee652-kube-api-access-6x4t2\") pod \"headlamp-7ddfbb94ff-wfpbq\" (UID: \"9edaf3a8-3410-4050-9173-2d8eb81ee652\") " pod="headlamp/headlamp-7ddfbb94ff-wfpbq"
	Jan 22 10:44:14 addons-760600 kubelet[2610]: I0122 10:44:14.189419    2610 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a35f452ee4c5661d2fbe61b29d8c5f8ce7c28f1b8a7ef0d0ea7165fc92f3bea"
	Jan 22 10:44:14 addons-760600 kubelet[2610]: E0122 10:44:14.843837    2610 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = container not running (b0b5938f890eff9a84317ed81848aaa83a46586316151377acef824b843f40d7)" containerID="b0b5938f890eff9a84317ed81848aaa83a46586316151377acef824b843f40d7" cmd=["/bin/gadgettracermanager","-liveness"]
	Jan 22 10:44:14 addons-760600 kubelet[2610]: E0122 10:44:14.854943    2610 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = container not running (b0b5938f890eff9a84317ed81848aaa83a46586316151377acef824b843f40d7)" containerID="b0b5938f890eff9a84317ed81848aaa83a46586316151377acef824b843f40d7" cmd=["/bin/gadgettracermanager","-liveness"]
	Jan 22 10:44:14 addons-760600 kubelet[2610]: E0122 10:44:14.859310    2610 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = container not running (b0b5938f890eff9a84317ed81848aaa83a46586316151377acef824b843f40d7)" containerID="b0b5938f890eff9a84317ed81848aaa83a46586316151377acef824b843f40d7" cmd=["/bin/gadgettracermanager","-liveness"]
	Jan 22 10:44:15 addons-760600 kubelet[2610]: E0122 10:44:15.093395    2610 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="b0b5938f890eff9a84317ed81848aaa83a46586316151377acef824b843f40d7" cmd=["/bin/gadgettracermanager","-liveness"]
	Jan 22 10:44:15 addons-760600 kubelet[2610]: E0122 10:44:15.096145    2610 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = container not running (b0b5938f890eff9a84317ed81848aaa83a46586316151377acef824b843f40d7)" containerID="b0b5938f890eff9a84317ed81848aaa83a46586316151377acef824b843f40d7" cmd=["/bin/gadgettracermanager","-liveness"]
	Jan 22 10:44:15 addons-760600 kubelet[2610]: E0122 10:44:15.098175    2610 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = container not running (b0b5938f890eff9a84317ed81848aaa83a46586316151377acef824b843f40d7)" containerID="b0b5938f890eff9a84317ed81848aaa83a46586316151377acef824b843f40d7" cmd=["/bin/gadgettracermanager","-liveness"]
	Jan 22 10:44:15 addons-760600 kubelet[2610]: E0122 10:44:15.100387    2610 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = container not running (b0b5938f890eff9a84317ed81848aaa83a46586316151377acef824b843f40d7)" containerID="b0b5938f890eff9a84317ed81848aaa83a46586316151377acef824b843f40d7" cmd=["/bin/gadgettracermanager","-liveness"]
	Jan 22 10:44:15 addons-760600 kubelet[2610]: E0122 10:44:15.102587    2610 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = container not running (b0b5938f890eff9a84317ed81848aaa83a46586316151377acef824b843f40d7)" containerID="b0b5938f890eff9a84317ed81848aaa83a46586316151377acef824b843f40d7" cmd=["/bin/gadgettracermanager","-liveness"]
	Jan 22 10:44:15 addons-760600 kubelet[2610]: E0122 10:44:15.104515    2610 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = container not running (b0b5938f890eff9a84317ed81848aaa83a46586316151377acef824b843f40d7)" containerID="b0b5938f890eff9a84317ed81848aaa83a46586316151377acef824b843f40d7" cmd=["/bin/gadgettracermanager","-liveness"]
	Jan 22 10:44:15 addons-760600 kubelet[2610]: E0122 10:44:15.106770    2610 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = container not running (b0b5938f890eff9a84317ed81848aaa83a46586316151377acef824b843f40d7)" containerID="b0b5938f890eff9a84317ed81848aaa83a46586316151377acef824b843f40d7" cmd=["/bin/gadgettracermanager","-liveness"]
	Jan 22 10:44:15 addons-760600 kubelet[2610]: I0122 10:44:15.232592    2610 scope.go:117] "RemoveContainer" containerID="40a3afabe1cf06c2c45d5e61192fa607794540396254261ff4b35a836e6c96fe"
	Jan 22 10:44:15 addons-760600 kubelet[2610]: I0122 10:44:15.233377    2610 scope.go:117] "RemoveContainer" containerID="b0b5938f890eff9a84317ed81848aaa83a46586316151377acef824b843f40d7"
	Jan 22 10:44:15 addons-760600 kubelet[2610]: E0122 10:44:15.235887    2610 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=gadget pod=gadget-ntxzr_gadget(3882ea5e-b242-4ef6-b1c8-ecc83c0a05ff)\"" pod="gadget/gadget-ntxzr" podUID="3882ea5e-b242-4ef6-b1c8-ecc83c0a05ff"
	Jan 22 10:44:19 addons-760600 kubelet[2610]: I0122 10:44:19.335169    2610 topology_manager.go:215] "Topology Admit Handler" podUID="ac80a79e-e436-44c7-9a5e-07c13d286f97" podNamespace="default" podName="task-pv-pod-restore"
	Jan 22 10:44:19 addons-760600 kubelet[2610]: I0122 10:44:19.497033    2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f6ee4bed-82b4-4719-8cf3-7aebdaf9fe37\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^31eba91a-b913-11ee-9e0c-e6df7cd6f000\") pod \"task-pv-pod-restore\" (UID: \"ac80a79e-e436-44c7-9a5e-07c13d286f97\") " pod="default/task-pv-pod-restore"
	Jan 22 10:44:19 addons-760600 kubelet[2610]: I0122 10:44:19.497087    2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6rv8\" (UniqueName: \"kubernetes.io/projected/ac80a79e-e436-44c7-9a5e-07c13d286f97-kube-api-access-j6rv8\") pod \"task-pv-pod-restore\" (UID: \"ac80a79e-e436-44c7-9a5e-07c13d286f97\") " pod="default/task-pv-pod-restore"
	Jan 22 10:44:19 addons-760600 kubelet[2610]: I0122 10:44:19.497123    2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ac80a79e-e436-44c7-9a5e-07c13d286f97-gcp-creds\") pod \"task-pv-pod-restore\" (UID: \"ac80a79e-e436-44c7-9a5e-07c13d286f97\") " pod="default/task-pv-pod-restore"
	Jan 22 10:44:19 addons-760600 kubelet[2610]: I0122 10:44:19.687040    2610 operation_generator.go:665] "MountVolume.MountDevice succeeded for volume \"pvc-f6ee4bed-82b4-4719-8cf3-7aebdaf9fe37\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^31eba91a-b913-11ee-9e0c-e6df7cd6f000\") pod \"task-pv-pod-restore\" (UID: \"ac80a79e-e436-44c7-9a5e-07c13d286f97\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/532a434382b278d63ffa4276be7f7448e8f4748225bba5181bc749ea39299f1d/globalmount\"" pod="default/task-pv-pod-restore"
	Jan 22 10:44:19 addons-760600 kubelet[2610]: I0122 10:44:19.830904    2610 scope.go:117] "RemoveContainer" containerID="b0b5938f890eff9a84317ed81848aaa83a46586316151377acef824b843f40d7"
	Jan 22 10:44:19 addons-760600 kubelet[2610]: E0122 10:44:19.831585    2610 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=gadget pod=gadget-ntxzr_gadget(3882ea5e-b242-4ef6-b1c8-ecc83c0a05ff)\"" pod="gadget/gadget-ntxzr" podUID="3882ea5e-b242-4ef6-b1c8-ecc83c0a05ff"
	
	
	==> storage-provisioner [0a61c9fc54fd] <==
	I0122 10:40:48.403457       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0122 10:40:48.421872       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0122 10:40:48.421918       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0122 10:40:48.452884       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0122 10:40:48.457648       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d01d76ab-f288-417e-8f36-bdedf2a8b778", APIVersion:"v1", ResourceVersion:"665", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-760600_5bbb2128-8185-4cde-b5a4-58f2790462d6 became leader
	I0122 10:40:48.457723       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-760600_5bbb2128-8185-4cde-b5a4-58f2790462d6!
	I0122 10:40:48.560404       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-760600_5bbb2128-8185-4cde-b5a4-58f2790462d6!

-- /stdout --
** stderr ** 
	W0122 10:44:12.115302   14320 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-760600 -n addons-760600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-760600 -n addons-760600: (13.3703958s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-760600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx ingress-nginx-admission-create-glc57 ingress-nginx-admission-patch-5xsrn
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-760600 describe pod nginx ingress-nginx-admission-create-glc57 ingress-nginx-admission-patch-5xsrn
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-760600 describe pod nginx ingress-nginx-admission-create-glc57 ingress-nginx-admission-patch-5xsrn: exit status 1 (276.103ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-760600/172.23.176.254
	Start Time:       Mon, 22 Jan 2024 10:44:35 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6m7bc (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  kube-api-access-6m7bc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/nginx to addons-760600
	  Normal  Pulling    0s    kubelet            Pulling image "docker.io/nginx:alpine"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-glc57" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-5xsrn" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-760600 describe pod nginx ingress-nginx-admission-create-glc57 ingress-nginx-admission-patch-5xsrn: exit status 1
--- FAIL: TestAddons/parallel/Registry (73.34s)

TestForceSystemdFlag (482.84s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-487400 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:91: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p force-systemd-flag-487400 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: exit status 90 (5m48.2439915s)

-- stdout --
	* [force-systemd-flag-487400] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18014
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting control plane node force-systemd-flag-487400 in cluster force-systemd-flag-487400
	* Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	W0122 12:50:48.439328    2092 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0122 12:50:48.533806    2092 out.go:296] Setting OutFile to fd 916 ...
	I0122 12:50:48.533806    2092 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0122 12:50:48.533806    2092 out.go:309] Setting ErrFile to fd 920...
	I0122 12:50:48.533806    2092 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0122 12:50:48.566799    2092 out.go:303] Setting JSON to false
	I0122 12:50:48.569816    2092 start.go:128] hostinfo: {"hostname":"minikube7","uptime":47217,"bootTime":1705880631,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3930 Build 19045.3930","kernelVersion":"10.0.19045.3930 Build 19045.3930","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0122 12:50:48.570812    2092 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0122 12:50:48.571807    2092 out.go:177] * [force-systemd-flag-487400] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	I0122 12:50:48.572808    2092 notify.go:220] Checking for updates...
	I0122 12:50:48.573804    2092 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 12:50:48.575807    2092 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0122 12:50:48.576815    2092 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0122 12:50:48.577823    2092 out.go:177]   - MINIKUBE_LOCATION=18014
	I0122 12:50:48.578809    2092 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 12:50:48.580818    2092 driver.go:392] Setting default libvirt URI to qemu:///system
	I0122 12:50:55.406486    2092 out.go:177] * Using the hyperv driver based on user configuration
	I0122 12:50:55.407255    2092 start.go:298] selected driver: hyperv
	I0122 12:50:55.407255    2092 start.go:902] validating driver "hyperv" against <nil>
	I0122 12:50:55.407255    2092 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0122 12:50:55.461535    2092 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0122 12:50:55.462534    2092 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0122 12:50:55.462534    2092 cni.go:84] Creating CNI manager for ""
	I0122 12:50:55.462534    2092 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0122 12:50:55.462534    2092 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0122 12:50:55.462534    2092 start_flags.go:321] config:
	{Name:force-systemd-flag-487400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-487400 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0122 12:50:55.463534    2092 iso.go:125] acquiring lock: {Name:mkdec6f26395f187031104148d2e41f41e1ae42b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 12:50:55.464534    2092 out.go:177] * Starting control plane node force-systemd-flag-487400 in cluster force-systemd-flag-487400
	I0122 12:50:55.464534    2092 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0122 12:50:55.465535    2092 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0122 12:50:55.465535    2092 cache.go:56] Caching tarball of preloaded images
	I0122 12:50:55.465535    2092 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0122 12:50:55.465535    2092 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0122 12:50:55.466535    2092 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\force-systemd-flag-487400\config.json ...
	I0122 12:50:55.466535    2092 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\force-systemd-flag-487400\config.json: {Name:mk519b2ba82d854045045154c983f26041c61a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 12:50:55.467535    2092 start.go:365] acquiring machines lock for force-systemd-flag-487400: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0122 12:53:05.840234    2092 start.go:369] acquired machines lock for "force-systemd-flag-487400" in 2m10.3720892s
	I0122 12:53:05.840286    2092 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-487400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-487400 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0122 12:53:05.843262    2092 start.go:125] createHost starting for "" (driver="hyperv")
	I0122 12:53:05.846242    2092 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0122 12:53:05.847000    2092 start.go:159] libmachine.API.Create for "force-systemd-flag-487400" (driver="hyperv")
	I0122 12:53:05.847067    2092 client.go:168] LocalClient.Create starting
	I0122 12:53:05.847503    2092 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0122 12:53:05.847503    2092 main.go:141] libmachine: Decoding PEM data...
	I0122 12:53:05.847503    2092 main.go:141] libmachine: Parsing certificate...
	I0122 12:53:05.848224    2092 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0122 12:53:05.848224    2092 main.go:141] libmachine: Decoding PEM data...
	I0122 12:53:05.848224    2092 main.go:141] libmachine: Parsing certificate...
	I0122 12:53:05.848224    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0122 12:53:07.832343    2092 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0122 12:53:07.832526    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:53:07.832526    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0122 12:53:09.694895    2092 main.go:141] libmachine: [stdout =====>] : False
	
	I0122 12:53:09.695150    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:53:09.695288    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0122 12:53:11.318469    2092 main.go:141] libmachine: [stdout =====>] : True
	
	I0122 12:53:11.318676    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:53:11.318835    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0122 12:53:15.104776    2092 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0122 12:53:15.105009    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:53:15.108294    2092 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0122 12:53:15.550918    2092 main.go:141] libmachine: Creating SSH key...
	I0122 12:53:15.706852    2092 main.go:141] libmachine: Creating VM...
	I0122 12:53:15.706852    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0122 12:53:18.795799    2092 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0122 12:53:18.795977    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:53:18.796044    2092 main.go:141] libmachine: Using switch "Default Switch"
	I0122 12:53:18.796044    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0122 12:53:20.700768    2092 main.go:141] libmachine: [stdout =====>] : True
	
	I0122 12:53:20.700768    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:53:20.700768    2092 main.go:141] libmachine: Creating VHD
	I0122 12:53:20.700768    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\force-systemd-flag-487400\fixed.vhd' -SizeBytes 10MB -Fixed
	I0122 12:53:24.866656    2092 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\force-systemd-flag-487400\
	                          fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : B35CD382-92EA-4C51-AFB1-0608A925F350
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0122 12:53:24.866723    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:53:24.866723    2092 main.go:141] libmachine: Writing magic tar header
	I0122 12:53:24.866794    2092 main.go:141] libmachine: Writing SSH key tar header
	I0122 12:53:24.875875    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\force-systemd-flag-487400\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\force-systemd-flag-487400\disk.vhd' -VHDType Dynamic -DeleteSource
	I0122 12:53:28.258842    2092 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:53:28.258842    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:53:28.259065    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\force-systemd-flag-487400\disk.vhd' -SizeBytes 20000MB
	I0122 12:53:30.996437    2092 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:53:30.996507    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:53:30.996507    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM force-systemd-flag-487400 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\force-systemd-flag-487400' -SwitchName 'Default Switch' -MemoryStartupBytes 2048MB
	I0122 12:53:35.963614    2092 main.go:141] libmachine: [stdout =====>] : 
	Name                      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                      ----- ----------- ----------------- ------   ------             -------
	force-systemd-flag-487400 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0122 12:53:35.963763    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:53:35.963763    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName force-systemd-flag-487400 -DynamicMemoryEnabled $false
	I0122 12:53:38.320016    2092 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:53:38.320308    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:53:38.320308    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor force-systemd-flag-487400 -Count 2
	I0122 12:53:40.573932    2092 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:53:40.573996    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:53:40.574099    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName force-systemd-flag-487400 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\force-systemd-flag-487400\boot2docker.iso'
	I0122 12:53:43.281023    2092 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:53:43.281023    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:53:43.281023    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName force-systemd-flag-487400 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\force-systemd-flag-487400\disk.vhd'
	I0122 12:53:46.024486    2092 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:53:46.024486    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:53:46.024486    2092 main.go:141] libmachine: Starting VM...
	I0122 12:53:46.024593    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM force-systemd-flag-487400
	I0122 12:53:49.010128    2092 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:53:49.010548    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:53:49.010548    2092 main.go:141] libmachine: Waiting for host to start...
	I0122 12:53:49.010668    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-487400 ).state
	I0122 12:53:51.380415    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:53:51.380505    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:53:51.380577    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-487400 ).networkadapters[0]).ipaddresses[0]
	I0122 12:53:53.991681    2092 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:53:53.991833    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:53:54.992220    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-487400 ).state
	I0122 12:53:57.322173    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:53:57.322258    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:53:57.322337    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-487400 ).networkadapters[0]).ipaddresses[0]
	I0122 12:53:59.960182    2092 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:53:59.960182    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:54:00.960906    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-487400 ).state
	I0122 12:54:03.262975    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:54:03.262975    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:54:03.262975    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-487400 ).networkadapters[0]).ipaddresses[0]
	I0122 12:54:05.886111    2092 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:54:05.886151    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:54:06.900597    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-487400 ).state
	I0122 12:54:09.567948    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:54:09.568186    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:54:09.568293    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-487400 ).networkadapters[0]).ipaddresses[0]
	I0122 12:54:12.470531    2092 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:54:12.470531    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:54:13.475928    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-487400 ).state
	I0122 12:54:15.956997    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:54:15.956997    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:54:15.956997    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-487400 ).networkadapters[0]).ipaddresses[0]
	I0122 12:54:18.691450    2092 main.go:141] libmachine: [stdout =====>] : 172.23.179.67
	
	I0122 12:54:18.691541    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:54:18.691670    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-487400 ).state
	I0122 12:54:20.942682    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:54:20.942899    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:54:20.942899    2092 machine.go:88] provisioning docker machine ...
	I0122 12:54:20.943016    2092 buildroot.go:166] provisioning hostname "force-systemd-flag-487400"
	I0122 12:54:20.943016    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-487400 ).state
	I0122 12:54:23.237525    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:54:23.237525    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:54:23.237615    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-487400 ).networkadapters[0]).ipaddresses[0]
	I0122 12:54:25.926236    2092 main.go:141] libmachine: [stdout =====>] : 172.23.179.67
	
	I0122 12:54:25.926236    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:54:25.930876    2092 main.go:141] libmachine: Using SSH client type: native
	I0122 12:54:25.943080    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.179.67 22 <nil> <nil>}
	I0122 12:54:25.943080    2092 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-487400 && echo "force-systemd-flag-487400" | sudo tee /etc/hostname
	I0122 12:54:26.112245    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-487400
	
	I0122 12:54:26.112245    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-487400 ).state
	I0122 12:54:28.375273    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:54:28.375487    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:54:28.375561    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-487400 ).networkadapters[0]).ipaddresses[0]
	I0122 12:54:31.090212    2092 main.go:141] libmachine: [stdout =====>] : 172.23.179.67
	
	I0122 12:54:31.090378    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:54:31.096024    2092 main.go:141] libmachine: Using SSH client type: native
	I0122 12:54:31.096847    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.179.67 22 <nil> <nil>}
	I0122 12:54:31.097371    2092 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-487400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-487400/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-487400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0122 12:54:31.250258    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 12:54:31.250258    2092 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0122 12:54:31.250258    2092 buildroot.go:174] setting up certificates
	I0122 12:54:31.250258    2092 provision.go:83] configureAuth start
	I0122 12:54:31.251207    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-487400 ).state
	I0122 12:54:33.504059    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:54:33.504059    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:54:33.504145    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-487400 ).networkadapters[0]).ipaddresses[0]
	I0122 12:54:36.128923    2092 main.go:141] libmachine: [stdout =====>] : 172.23.179.67
	
	I0122 12:54:36.129099    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:54:36.129389    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-487400 ).state
	I0122 12:54:38.269397    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:54:38.269397    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:54:38.269504    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-487400 ).networkadapters[0]).ipaddresses[0]
	I0122 12:54:40.883508    2092 main.go:141] libmachine: [stdout =====>] : 172.23.179.67
	
	I0122 12:54:40.883508    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:54:40.883508    2092 provision.go:138] copyHostCerts
	I0122 12:54:40.883508    2092 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0122 12:54:40.884211    2092 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0122 12:54:40.884211    2092 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0122 12:54:40.884211    2092 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0122 12:54:40.886064    2092 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0122 12:54:40.886228    2092 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0122 12:54:40.886228    2092 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0122 12:54:40.886785    2092 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0122 12:54:40.887787    2092 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0122 12:54:40.887787    2092 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0122 12:54:40.887787    2092 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0122 12:54:40.888760    2092 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0122 12:54:40.890090    2092 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.force-systemd-flag-487400 san=[172.23.179.67 172.23.179.67 localhost 127.0.0.1 minikube force-systemd-flag-487400]
	I0122 12:54:41.361140    2092 provision.go:172] copyRemoteCerts
	I0122 12:54:41.376176    2092 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0122 12:54:41.376284    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-487400 ).state
	I0122 12:54:43.578461    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:54:43.578599    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:54:43.578599    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-487400 ).networkadapters[0]).ipaddresses[0]
	I0122 12:54:46.265609    2092 main.go:141] libmachine: [stdout =====>] : 172.23.179.67
	
	I0122 12:54:46.265609    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:54:46.266344    2092 sshutil.go:53] new ssh client: &{IP:172.23.179.67 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\force-systemd-flag-487400\id_rsa Username:docker}
	I0122 12:54:46.373143    2092 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9969132s)
	I0122 12:54:46.373143    2092 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0122 12:54:46.373940    2092 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0122 12:54:46.415494    2092 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0122 12:54:46.416026    2092 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1249 bytes)
	I0122 12:54:46.457522    2092 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0122 12:54:46.458114    2092 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0122 12:54:46.497317    2092 provision.go:86] duration metric: configureAuth took 15.246992s
	I0122 12:54:46.497317    2092 buildroot.go:189] setting minikube options for container-runtime
	I0122 12:54:46.498246    2092 config.go:182] Loaded profile config "force-systemd-flag-487400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 12:54:46.498376    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-487400 ).state
	I0122 12:54:48.745743    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:54:48.745889    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:54:48.745889    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-487400 ).networkadapters[0]).ipaddresses[0]
	I0122 12:54:51.398972    2092 main.go:141] libmachine: [stdout =====>] : 172.23.179.67
	
	I0122 12:54:51.398972    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:54:51.405562    2092 main.go:141] libmachine: Using SSH client type: native
	I0122 12:54:51.406165    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.179.67 22 <nil> <nil>}
	I0122 12:54:51.406165    2092 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0122 12:54:51.547451    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0122 12:54:51.547451    2092 buildroot.go:70] root file system type: tmpfs
	I0122 12:54:51.547710    2092 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0122 12:54:51.547853    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-487400 ).state
	I0122 12:54:53.801150    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:54:53.801150    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:54:53.801259    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-487400 ).networkadapters[0]).ipaddresses[0]
	I0122 12:54:56.458087    2092 main.go:141] libmachine: [stdout =====>] : 172.23.179.67
	
	I0122 12:54:56.458289    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:54:56.464200    2092 main.go:141] libmachine: Using SSH client type: native
	I0122 12:54:56.464785    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.179.67 22 <nil> <nil>}
	I0122 12:54:56.464960    2092 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0122 12:54:56.626008    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0122 12:54:56.626008    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-487400 ).state
	I0122 12:54:58.793451    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:54:58.793810    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:54:58.793943    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-487400 ).networkadapters[0]).ipaddresses[0]
	I0122 12:55:01.432745    2092 main.go:141] libmachine: [stdout =====>] : 172.23.179.67
	
	I0122 12:55:01.433153    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:55:01.442047    2092 main.go:141] libmachine: Using SSH client type: native
	I0122 12:55:01.442749    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.179.67 22 <nil> <nil>}
	I0122 12:55:01.442749    2092 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0122 12:55:02.484773    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0122 12:55:02.484773    2092 machine.go:91] provisioned docker machine in 41.5416921s
	I0122 12:55:02.484773    2092 client.go:171] LocalClient.Create took 1m56.6371985s
	I0122 12:55:02.484773    2092 start.go:167] duration metric: libmachine.API.Create for "force-systemd-flag-487400" took 1m56.6372656s
	I0122 12:55:02.484773    2092 start.go:300] post-start starting for "force-systemd-flag-487400" (driver="hyperv")
	I0122 12:55:02.484773    2092 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0122 12:55:02.496763    2092 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0122 12:55:02.496763    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-487400 ).state
	I0122 12:55:04.768932    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:55:04.769109    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:55:04.769170    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-487400 ).networkadapters[0]).ipaddresses[0]
	I0122 12:55:07.494672    2092 main.go:141] libmachine: [stdout =====>] : 172.23.179.67
	
	I0122 12:55:07.494672    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:55:07.494672    2092 sshutil.go:53] new ssh client: &{IP:172.23.179.67 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\force-systemd-flag-487400\id_rsa Username:docker}
	I0122 12:55:07.604294    2092 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1075086s)
	I0122 12:55:07.619124    2092 ssh_runner.go:195] Run: cat /etc/os-release
	I0122 12:55:07.625828    2092 info.go:137] Remote host: Buildroot 2021.02.12
	I0122 12:55:07.625828    2092 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0122 12:55:07.625828    2092 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0122 12:55:07.627302    2092 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> 63962.pem in /etc/ssl/certs
	I0122 12:55:07.627302    2092 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> /etc/ssl/certs/63962.pem
	I0122 12:55:07.641568    2092 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0122 12:55:07.658221    2092 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem --> /etc/ssl/certs/63962.pem (1708 bytes)
	I0122 12:55:07.696778    2092 start.go:303] post-start completed in 5.2119408s
	I0122 12:55:07.699466    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-487400 ).state
	I0122 12:55:10.020135    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:55:10.020378    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:55:10.020530    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-487400 ).networkadapters[0]).ipaddresses[0]
	I0122 12:55:12.804776    2092 main.go:141] libmachine: [stdout =====>] : 172.23.179.67
	
	I0122 12:55:12.805027    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:55:12.805267    2092 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\force-systemd-flag-487400\config.json ...
	I0122 12:55:12.808179    2092 start.go:128] duration metric: createHost completed in 2m6.9642654s
	I0122 12:55:12.808281    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-487400 ).state
	I0122 12:55:15.145340    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:55:15.145478    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:55:15.145478    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-487400 ).networkadapters[0]).ipaddresses[0]
	I0122 12:55:17.818562    2092 main.go:141] libmachine: [stdout =====>] : 172.23.179.67
	
	I0122 12:55:17.818828    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:55:17.823759    2092 main.go:141] libmachine: Using SSH client type: native
	I0122 12:55:17.824453    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.179.67 22 <nil> <nil>}
	I0122 12:55:17.824453    2092 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0122 12:55:17.966267    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705928117.966901614
	
	I0122 12:55:17.966267    2092 fix.go:206] guest clock: 1705928117.966901614
	I0122 12:55:17.966267    2092 fix.go:219] Guest: 2024-01-22 12:55:17.966901614 +0000 UTC Remote: 2024-01-22 12:55:12.8081791 +0000 UTC m=+264.490076201 (delta=5.158722514s)
	I0122 12:55:17.966267    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-487400 ).state
	I0122 12:55:20.667835    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:55:20.667835    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:55:20.667835    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-487400 ).networkadapters[0]).ipaddresses[0]
	I0122 12:55:23.285226    2092 main.go:141] libmachine: [stdout =====>] : 172.23.179.67
	
	I0122 12:55:23.285226    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:55:23.290656    2092 main.go:141] libmachine: Using SSH client type: native
	I0122 12:55:23.291426    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.179.67 22 <nil> <nil>}
	I0122 12:55:23.291426    2092 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1705928117
	I0122 12:55:23.439524    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan 22 12:55:17 UTC 2024
	
	I0122 12:55:23.439593    2092 fix.go:226] clock set: Mon Jan 22 12:55:17 UTC 2024
	 (err=<nil>)
	I0122 12:55:23.439593    2092 start.go:83] releasing machines lock for "force-systemd-flag-487400", held for 2m17.5987069s
	I0122 12:55:23.439883    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-487400 ).state
	I0122 12:55:25.702145    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:55:25.702360    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:55:25.702360    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-487400 ).networkadapters[0]).ipaddresses[0]
	I0122 12:55:28.375770    2092 main.go:141] libmachine: [stdout =====>] : 172.23.179.67
	
	I0122 12:55:28.376173    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:55:28.380611    2092 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0122 12:55:28.380790    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-487400 ).state
	I0122 12:55:28.393307    2092 ssh_runner.go:195] Run: cat /version.json
	I0122 12:55:28.393307    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-487400 ).state
	I0122 12:55:30.701975    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:55:30.701975    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:55:30.701975    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-487400 ).networkadapters[0]).ipaddresses[0]
	I0122 12:55:30.701975    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:55:30.702168    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:55:30.702275    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-487400 ).networkadapters[0]).ipaddresses[0]
	I0122 12:55:33.525381    2092 main.go:141] libmachine: [stdout =====>] : 172.23.179.67
	
	I0122 12:55:33.525657    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:55:33.525978    2092 sshutil.go:53] new ssh client: &{IP:172.23.179.67 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\force-systemd-flag-487400\id_rsa Username:docker}
	I0122 12:55:33.539920    2092 main.go:141] libmachine: [stdout =====>] : 172.23.179.67
	
	I0122 12:55:33.541761    2092 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:55:33.542407    2092 sshutil.go:53] new ssh client: &{IP:172.23.179.67 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\force-systemd-flag-487400\id_rsa Username:docker}
	I0122 12:55:33.625807    2092 ssh_runner.go:235] Completed: cat /version.json: (5.2324225s)
	I0122 12:55:33.639760    2092 ssh_runner.go:195] Run: systemctl --version
	I0122 12:55:33.733681    2092 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3530471s)
	I0122 12:55:33.747855    2092 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0122 12:55:33.756428    2092 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0122 12:55:33.770843    2092 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0122 12:55:33.796170    2092 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0122 12:55:33.796170    2092 start.go:475] detecting cgroup driver to use...
	I0122 12:55:33.796170    2092 start.go:479] using "systemd" cgroup driver as enforced via flags
	I0122 12:55:33.797164    2092 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 12:55:33.845854    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0122 12:55:33.880535    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0122 12:55:33.896963    2092 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0122 12:55:33.910148    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0122 12:55:33.943410    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 12:55:33.974830    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0122 12:55:34.007498    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 12:55:34.039415    2092 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0122 12:55:34.070010    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0122 12:55:34.100198    2092 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0122 12:55:34.130676    2092 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0122 12:55:34.164303    2092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 12:55:34.358092    2092 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0122 12:55:34.383388    2092 start.go:475] detecting cgroup driver to use...
	I0122 12:55:34.383388    2092 start.go:479] using "systemd" cgroup driver as enforced via flags
	I0122 12:55:34.397036    2092 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0122 12:55:34.431892    2092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 12:55:34.465504    2092 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0122 12:55:34.512952    2092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 12:55:34.549379    2092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0122 12:55:34.581745    2092 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0122 12:55:34.633351    2092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0122 12:55:34.653413    2092 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 12:55:34.697441    2092 ssh_runner.go:195] Run: which cri-dockerd
	I0122 12:55:34.718625    2092 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0122 12:55:34.733954    2092 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0122 12:55:34.779040    2092 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0122 12:55:34.947631    2092 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0122 12:55:35.118571    2092 docker.go:574] configuring docker to use "systemd" as cgroup driver...
	I0122 12:55:35.118800    2092 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0122 12:55:35.167298    2092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 12:55:35.340760    2092 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0122 12:56:36.443562    2092 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1025341s)
	I0122 12:56:36.458889    2092 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0122 12:56:36.486888    2092 out.go:177] 
	W0122 12:56:36.488504    2092 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Mon 2024-01-22 12:54:07 UTC, ends at Mon 2024-01-22 12:56:36 UTC. --
	Jan 22 12:55:01 force-systemd-flag-487400 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[671]: time="2024-01-22T12:55:02.031061258Z" level=info msg="Starting up"
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[671]: time="2024-01-22T12:55:02.032978776Z" level=info msg="containerd not running, starting managed containerd"
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[671]: time="2024-01-22T12:55:02.036465509Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=677
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.077960906Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.108283796Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.108433098Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.111077223Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.111187224Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.111610828Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.111780430Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.111895831Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.112112533Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.112351135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.112519537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.112902241Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.113023342Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.113042742Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.113209443Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.113308544Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.113438046Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.113474246Z" level=info msg="metadata content store policy set" policy=shared
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.125023656Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.125223058Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.125375260Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.125460361Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.125490261Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.125527761Z" level=info msg="NRI interface is disabled by configuration."
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.125596362Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.125768564Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.125794664Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.125812964Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.125828964Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.125845964Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.125865565Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.125898365Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.125913265Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.125927965Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.125942765Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.125958065Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.125971166Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.126059966Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.126740073Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.126817374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.126870874Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.126905374Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.126964175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.127002475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.127020776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.127034976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.127053976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.127069276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.127082876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.127096776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.127111876Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.127173477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.127292878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.127372279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.127430980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.127448980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.127466180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.127481580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.127496180Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.127515080Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.127530380Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.127613481Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.128082786Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.128353788Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.128599391Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.128628791Z" level=info msg="containerd successfully booted in 0.051849s"
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[671]: time="2024-01-22T12:55:02.167383862Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[671]: time="2024-01-22T12:55:02.183752118Z" level=info msg="Loading containers: start."
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[671]: time="2024-01-22T12:55:02.401987306Z" level=info msg="Loading containers: done."
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[671]: time="2024-01-22T12:55:02.428862463Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[671]: time="2024-01-22T12:55:02.428957164Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[671]: time="2024-01-22T12:55:02.429006165Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[671]: time="2024-01-22T12:55:02.429055865Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[671]: time="2024-01-22T12:55:02.429111066Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[671]: time="2024-01-22T12:55:02.429416069Z" level=info msg="Daemon has completed initialization"
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[671]: time="2024-01-22T12:55:02.483024282Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[671]: time="2024-01-22T12:55:02.483234084Z" level=info msg="API listen on [::]:2376"
	Jan 22 12:55:02 force-systemd-flag-487400 systemd[1]: Started Docker Application Container Engine.
	Jan 22 12:55:35 force-systemd-flag-487400 dockerd[671]: time="2024-01-22T12:55:35.360538575Z" level=info msg="Processing signal 'terminated'"
	Jan 22 12:55:35 force-systemd-flag-487400 systemd[1]: Stopping Docker Application Container Engine...
	Jan 22 12:55:35 force-systemd-flag-487400 dockerd[671]: time="2024-01-22T12:55:35.362324775Z" level=info msg="Daemon shutdown complete"
	Jan 22 12:55:35 force-systemd-flag-487400 dockerd[671]: time="2024-01-22T12:55:35.362546675Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 22 12:55:35 force-systemd-flag-487400 dockerd[671]: time="2024-01-22T12:55:35.362647775Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 22 12:55:35 force-systemd-flag-487400 dockerd[671]: time="2024-01-22T12:55:35.362670375Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Jan 22 12:55:36 force-systemd-flag-487400 systemd[1]: docker.service: Succeeded.
	Jan 22 12:55:36 force-systemd-flag-487400 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 12:55:36 force-systemd-flag-487400 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 12:55:36 force-systemd-flag-487400 dockerd[1008]: time="2024-01-22T12:55:36.433040575Z" level=info msg="Starting up"
	Jan 22 12:56:36 force-systemd-flag-487400 dockerd[1008]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 12:56:36 force-systemd-flag-487400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 12:56:36 force-systemd-flag-487400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 12:56:36 force-systemd-flag-487400 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.125971166Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.126059966Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.126740073Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.126817374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.126870874Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.126905374Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.126964175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.127002475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.127020776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.127034976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.127053976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.127069276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.127082876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.127096776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.127111876Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.127173477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.127292878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.127372279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.127430980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.127448980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.127466180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.127481580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.127496180Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.127515080Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.127530380Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.127613481Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.128082786Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.128353788Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.128599391Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[677]: time="2024-01-22T12:55:02.128628791Z" level=info msg="containerd successfully booted in 0.051849s"
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[671]: time="2024-01-22T12:55:02.167383862Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[671]: time="2024-01-22T12:55:02.183752118Z" level=info msg="Loading containers: start."
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[671]: time="2024-01-22T12:55:02.401987306Z" level=info msg="Loading containers: done."
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[671]: time="2024-01-22T12:55:02.428862463Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[671]: time="2024-01-22T12:55:02.428957164Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[671]: time="2024-01-22T12:55:02.429006165Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[671]: time="2024-01-22T12:55:02.429055865Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[671]: time="2024-01-22T12:55:02.429111066Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[671]: time="2024-01-22T12:55:02.429416069Z" level=info msg="Daemon has completed initialization"
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[671]: time="2024-01-22T12:55:02.483024282Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 22 12:55:02 force-systemd-flag-487400 dockerd[671]: time="2024-01-22T12:55:02.483234084Z" level=info msg="API listen on [::]:2376"
	Jan 22 12:55:02 force-systemd-flag-487400 systemd[1]: Started Docker Application Container Engine.
	Jan 22 12:55:35 force-systemd-flag-487400 dockerd[671]: time="2024-01-22T12:55:35.360538575Z" level=info msg="Processing signal 'terminated'"
	Jan 22 12:55:35 force-systemd-flag-487400 systemd[1]: Stopping Docker Application Container Engine...
	Jan 22 12:55:35 force-systemd-flag-487400 dockerd[671]: time="2024-01-22T12:55:35.362324775Z" level=info msg="Daemon shutdown complete"
	Jan 22 12:55:35 force-systemd-flag-487400 dockerd[671]: time="2024-01-22T12:55:35.362546675Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 22 12:55:35 force-systemd-flag-487400 dockerd[671]: time="2024-01-22T12:55:35.362647775Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 22 12:55:35 force-systemd-flag-487400 dockerd[671]: time="2024-01-22T12:55:35.362670375Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Jan 22 12:55:36 force-systemd-flag-487400 systemd[1]: docker.service: Succeeded.
	Jan 22 12:55:36 force-systemd-flag-487400 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 12:55:36 force-systemd-flag-487400 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 12:55:36 force-systemd-flag-487400 dockerd[1008]: time="2024-01-22T12:55:36.433040575Z" level=info msg="Starting up"
	Jan 22 12:56:36 force-systemd-flag-487400 dockerd[1008]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 12:56:36 force-systemd-flag-487400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 12:56:36 force-systemd-flag-487400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 12:56:36 force-systemd-flag-487400 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0122 12:56:36.489037    2092 out.go:239] * 
	* 
	W0122 12:56:36.490339    2092 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0122 12:56:36.490901    2092 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p force-systemd-flag-487400 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv" : exit status 90
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-487400 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-487400 ssh "docker info --format {{.CgroupDriver}}": (1m0.1654887s)
docker_test.go:115: expected systemd cgroup driver, got: 
-- stdout --
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0122 12:56:36.891568    3352 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
panic.go:523: *** TestForceSystemdFlag FAILED at 2024-01-22 12:57:36.9151819 +0000 UTC m=+8496.024966701
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-flag-487400 -n force-systemd-flag-487400
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-flag-487400 -n force-systemd-flag-487400: exit status 6 (12.3968446s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0122 12:57:37.040720   11108 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0122 12:57:49.238030   11108 status.go:415] kubeconfig endpoint: extract IP: "force-systemd-flag-487400" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "force-systemd-flag-487400" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
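The state string quoted above explains the "not running" verdict: the post-mortem helper appears to compare the entire stdout of the status command against a bare state name, so the stale-kubectl-context warning appended after "Running" breaks the exact match. A minimal sketch of that mismatch; the multi-line state text is taken from the log, while the exact-match comparison is an assumption about the helper's logic:

```shell
# Hypothetical re-creation of the check in helpers_test.go:241.
# The state below is verbatim from the log; the comparison is assumed.
state=$'Running\nWARNING: Your kubectl is pointing to stale minikube-vm.'
first_line="${state%%$'\n'*}"
if [ "$state" = "Running" ]; then
  echo "host running"
else
  echo "host not running"   # extra warning lines defeat the exact match
fi
echo "first line alone: $first_line"
```

Only the first line of the stdout is actually "Running"; the trailing warning lines are what make the whole-string comparison fail.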
helpers_test.go:175: Cleaning up "force-systemd-flag-487400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-487400
E0122 12:58:23.969738    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-487400: (1m1.8250779s)
--- FAIL: TestForceSystemdFlag (482.84s)
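The root cause above is dockerd timing out while dialing /run/containerd/containerd.sock after the systemd-forced restart ("context deadline exceeded" at 12:56:36). A quick way to see what dockerd is waiting on is to check whether the containerd socket actually exists inside the VM; a minimal sketch, where the socket path is the one from the log and `check_sock` is a hypothetical helper, not part of minikube:

```shell
# check_sock reports whether a unix socket exists at the given path.
# If the socket is missing, dockerd's dial blocks until its deadline
# expires, producing the failure seen in the log above.
check_sock() {
  if [ -S "$1" ]; then echo "present"; else echo "missing"; fi
}
check_sock /run/containerd/containerd.sock
```

On a healthy host this prints `present`; when it prints `missing`, `systemctl status containerd` and `journalctl -u containerd` inside the VM would be the next things to check.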

                                                
                                    
TestErrorSpam/setup (189.46s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-526800 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 --driver=hyperv
E0122 10:48:23.949144    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
E0122 10:48:23.964282    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
E0122 10:48:23.979476    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
E0122 10:48:24.010639    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
E0122 10:48:24.057373    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
E0122 10:48:24.151770    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
E0122 10:48:24.325435    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
E0122 10:48:24.659510    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
E0122 10:48:25.310041    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
E0122 10:48:26.597631    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
E0122 10:48:29.158334    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
E0122 10:48:34.279389    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
E0122 10:48:44.528644    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
E0122 10:49:05.013555    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
E0122 10:49:45.985393    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-526800 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 --driver=hyperv: (3m9.4622248s)
error_spam_test.go:96: unexpected stderr: "W0122 10:47:49.208571     752 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:110: minikube stdout:
* [nospam-526800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
- KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
- MINIKUBE_LOCATION=18014
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting control plane node nospam-526800 in cluster nospam-526800
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-526800" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0122 10:47:49.208571     752 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
--- FAIL: TestErrorSpam/setup (189.46s)
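The recurring warning about `...\contexts\meta\37a8eec1...\meta.json` is the Docker CLI failing to load metadata for the selected context. The hashed directory name is not random: the CLI's context store keys each metadata directory by the SHA-256 of the context name, and the hash in the log is exactly that of the string `default`:

```shell
# The Docker CLI context store names metadata directories after
# sha256(context name); the hash from the warning is sha256("default").
printf 'default' | sha256sum
# 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f  -
```

Re-selecting the built-in context (`docker context use default`) or clearing `currentContext` in `~/.docker/config.json` typically silences this warning; since minikube still works despite it, the test only trips on the unexpected stderr noise.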

                                                
                                    
TestFunctional/serial/SoftStart (338.86s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-969800 --alsologtostderr -v=8
E0122 10:58:23.948066    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
functional_test.go:655: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-969800 --alsologtostderr -v=8: exit status 90 (2m25.47802s)

                                                
                                                
-- stdout --
	* [functional-969800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18014
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting control plane node functional-969800 in cluster functional-969800
	* Updating the running hyperv "functional-969800" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0122 10:57:40.400950   10176 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0122 10:57:40.469886   10176 out.go:296] Setting OutFile to fd 668 ...
	I0122 10:57:40.471315   10176 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0122 10:57:40.471315   10176 out.go:309] Setting ErrFile to fd 532...
	I0122 10:57:40.471403   10176 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0122 10:57:40.492855   10176 out.go:303] Setting JSON to false
	I0122 10:57:40.496696   10176 start.go:128] hostinfo: {"hostname":"minikube7","uptime":40429,"bootTime":1705880631,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3930 Build 19045.3930","kernelVersion":"10.0.19045.3930 Build 19045.3930","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0122 10:57:40.496696   10176 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0122 10:57:40.498142   10176 out.go:177] * [functional-969800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	I0122 10:57:40.498142   10176 notify.go:220] Checking for updates...
	I0122 10:57:40.499579   10176 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 10:57:40.499736   10176 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0122 10:57:40.500343   10176 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0122 10:57:40.501324   10176 out.go:177]   - MINIKUBE_LOCATION=18014
	I0122 10:57:40.501745   10176 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 10:57:40.502974   10176 config.go:182] Loaded profile config "functional-969800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 10:57:40.502974   10176 driver.go:392] Setting default libvirt URI to qemu:///system
	I0122 10:57:45.928080   10176 out.go:177] * Using the hyperv driver based on existing profile
	I0122 10:57:45.928983   10176 start.go:298] selected driver: hyperv
	I0122 10:57:45.928983   10176 start.go:902] validating driver "hyperv" against &{Name:functional-969800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-969800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:172.23.182.59 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0122 10:57:45.929543   10176 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0122 10:57:45.979518   10176 cni.go:84] Creating CNI manager for ""
	I0122 10:57:45.980100   10176 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0122 10:57:45.980100   10176 start_flags.go:321] config:
	{Name:functional-969800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-969800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:172.23.182.59 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0122 10:57:45.980261   10176 iso.go:125] acquiring lock: {Name:mkdec6f26395f187031104148d2e41f41e1ae42b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 10:57:45.981049   10176 out.go:177] * Starting control plane node functional-969800 in cluster functional-969800
	I0122 10:57:45.982349   10176 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0122 10:57:45.982349   10176 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0122 10:57:45.982349   10176 cache.go:56] Caching tarball of preloaded images
	I0122 10:57:45.983003   10176 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0122 10:57:45.983003   10176 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0122 10:57:45.983545   10176 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-969800\config.json ...
	I0122 10:57:45.985896   10176 start.go:365] acquiring machines lock for functional-969800: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0122 10:57:45.985896   10176 start.go:369] acquired machines lock for "functional-969800" in 0s
	I0122 10:57:45.985896   10176 start.go:96] Skipping create...Using existing machine configuration
	I0122 10:57:45.985896   10176 fix.go:54] fixHost starting: 
	I0122 10:57:45.985896   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:57:48.714864   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:57:48.714864   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:57:48.714864   10176 fix.go:102] recreateIfNeeded on functional-969800: state=Running err=<nil>
	W0122 10:57:48.714864   10176 fix.go:128] unexpected machine state, will restart: <nil>
	I0122 10:57:48.716293   10176 out.go:177] * Updating the running hyperv "functional-969800" VM ...
	I0122 10:57:48.717336   10176 machine.go:88] provisioning docker machine ...
	I0122 10:57:48.717336   10176 buildroot.go:166] provisioning hostname "functional-969800"
	I0122 10:57:48.717336   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:57:50.876024   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:57:50.876229   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:57:50.876229   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:57:53.470625   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:57:53.470625   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:57:53.475376   10176 main.go:141] libmachine: Using SSH client type: native
	I0122 10:57:53.476351   10176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 10:57:53.476351   10176 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-969800 && echo "functional-969800" | sudo tee /etc/hostname
	I0122 10:57:53.638784   10176 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-969800
	
	I0122 10:57:53.638784   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:57:55.805671   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:57:55.805815   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:57:55.805815   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:57:58.401038   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:57:58.401038   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:57:58.407147   10176 main.go:141] libmachine: Using SSH client type: native
	I0122 10:57:58.407915   10176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 10:57:58.407915   10176 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-969800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-969800/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-969800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0122 10:57:58.548972   10176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 10:57:58.548972   10176 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0122 10:57:58.549601   10176 buildroot.go:174] setting up certificates
	I0122 10:57:58.549601   10176 provision.go:83] configureAuth start
	I0122 10:57:58.549601   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:00.709217   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:00.709410   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:00.709520   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:03.272329   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:03.272570   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:03.272686   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:05.424282   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:05.424282   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:05.424282   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:07.989209   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:07.989209   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:07.989209   10176 provision.go:138] copyHostCerts
	I0122 10:58:07.989628   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0122 10:58:07.990020   10176 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0122 10:58:07.990122   10176 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0122 10:58:07.990638   10176 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0122 10:58:07.992053   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0122 10:58:07.992258   10176 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0122 10:58:07.992345   10176 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0122 10:58:07.992434   10176 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0122 10:58:07.993591   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0122 10:58:07.993837   10176 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0122 10:58:07.993924   10176 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0122 10:58:07.994335   10176 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0122 10:58:07.995347   10176 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-969800 san=[172.23.182.59 172.23.182.59 localhost 127.0.0.1 minikube functional-969800]
	I0122 10:58:08.282789   10176 provision.go:172] copyRemoteCerts
	I0122 10:58:08.298910   10176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0122 10:58:08.298910   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:10.409009   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:10.409144   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:10.409236   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:12.976882   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:12.976956   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:12.977542   10176 sshutil.go:53] new ssh client: &{IP:172.23.182.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-969800\id_rsa Username:docker}
	I0122 10:58:13.085280   10176 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7863488s)
	I0122 10:58:13.085358   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0122 10:58:13.085992   10176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0122 10:58:13.126129   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0122 10:58:13.126556   10176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0122 10:58:13.168097   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0122 10:58:13.168507   10176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0122 10:58:13.217793   10176 provision.go:86] duration metric: configureAuth took 14.6681291s
	I0122 10:58:13.217918   10176 buildroot.go:189] setting minikube options for container-runtime
	I0122 10:58:13.218335   10176 config.go:182] Loaded profile config "functional-969800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 10:58:13.218335   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:15.351479   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:15.351479   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:15.351579   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:17.930279   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:17.930493   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:17.935586   10176 main.go:141] libmachine: Using SSH client type: native
	I0122 10:58:17.936273   10176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 10:58:17.936273   10176 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0122 10:58:18.079216   10176 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0122 10:58:18.079216   10176 buildroot.go:70] root file system type: tmpfs
	I0122 10:58:18.079787   10176 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0122 10:58:18.079878   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:20.240707   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:20.240707   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:20.240707   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:22.913834   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:22.913834   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:22.919127   10176 main.go:141] libmachine: Using SSH client type: native
	I0122 10:58:22.919857   10176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 10:58:22.919857   10176 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0122 10:58:23.085993   10176 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0122 10:58:23.085993   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:25.254494   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:25.254494   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:25.254617   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:27.803581   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:27.803653   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:27.809703   10176 main.go:141] libmachine: Using SSH client type: native
	I0122 10:58:27.810422   10176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 10:58:27.810422   10176 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0122 10:58:27.960981   10176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 10:58:27.960981   10176 machine.go:91] provisioned docker machine in 39.2434773s
	I0122 10:58:27.960981   10176 start.go:300] post-start starting for "functional-969800" (driver="hyperv")
	I0122 10:58:27.960981   10176 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0122 10:58:27.976090   10176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0122 10:58:27.976090   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:30.146064   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:30.146317   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:30.146317   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:32.720717   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:32.720900   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:32.720966   10176 sshutil.go:53] new ssh client: &{IP:172.23.182.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-969800\id_rsa Username:docker}
	I0122 10:58:32.831028   10176 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8548008s)
	I0122 10:58:32.844266   10176 ssh_runner.go:195] Run: cat /etc/os-release
	I0122 10:58:32.851138   10176 command_runner.go:130] > NAME=Buildroot
	I0122 10:58:32.851278   10176 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0122 10:58:32.851278   10176 command_runner.go:130] > ID=buildroot
	I0122 10:58:32.851490   10176 command_runner.go:130] > VERSION_ID=2021.02.12
	I0122 10:58:32.851490   10176 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0122 10:58:32.851544   10176 info.go:137] Remote host: Buildroot 2021.02.12
	I0122 10:58:32.851585   10176 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0122 10:58:32.852067   10176 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0122 10:58:32.853202   10176 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> 63962.pem in /etc/ssl/certs
	I0122 10:58:32.853202   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> /etc/ssl/certs/63962.pem
	I0122 10:58:32.854596   10176 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\6396\hosts -> hosts in /etc/test/nested/copy/6396
	I0122 10:58:32.854596   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\6396\hosts -> /etc/test/nested/copy/6396/hosts
	I0122 10:58:32.868168   10176 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/6396
	I0122 10:58:32.884833   10176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem --> /etc/ssl/certs/63962.pem (1708 bytes)
	I0122 10:58:32.932481   10176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\6396\hosts --> /etc/test/nested/copy/6396/hosts (40 bytes)
	I0122 10:58:32.976776   10176 start.go:303] post-start completed in 5.0157728s
	I0122 10:58:32.976776   10176 fix.go:56] fixHost completed within 46.9906788s
	I0122 10:58:32.976776   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:35.143871   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:35.143871   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:35.144017   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:37.703612   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:37.703920   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:37.710288   10176 main.go:141] libmachine: Using SSH client type: native
	I0122 10:58:37.710854   10176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 10:58:37.710854   10176 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0122 10:58:37.850913   10176 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705921117.851868669
	
	I0122 10:58:37.851011   10176 fix.go:206] guest clock: 1705921117.851868669
	I0122 10:58:37.851011   10176 fix.go:219] Guest: 2024-01-22 10:58:37.851868669 +0000 UTC Remote: 2024-01-22 10:58:32.9767767 +0000 UTC m=+52.675762201 (delta=4.875091969s)
	I0122 10:58:37.851175   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:40.010410   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:40.010601   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:40.010601   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:42.618291   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:42.618378   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:42.625417   10176 main.go:141] libmachine: Using SSH client type: native
	I0122 10:58:42.626844   10176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 10:58:42.626844   10176 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1705921117
	I0122 10:58:42.774588   10176 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan 22 10:58:37 UTC 2024
	
	I0122 10:58:42.774675   10176 fix.go:226] clock set: Mon Jan 22 10:58:37 UTC 2024
	 (err=<nil>)
	I0122 10:58:42.774675   10176 start.go:83] releasing machines lock for "functional-969800", held for 56.7885347s
	I0122 10:58:42.774814   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:44.916786   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:44.916786   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:44.916786   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:47.456679   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:47.456679   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:47.461274   10176 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0122 10:58:47.461385   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:47.472541   10176 ssh_runner.go:195] Run: cat /version.json
	I0122 10:58:47.472541   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:49.704663   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:49.704663   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:49.704663   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:49.704663   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:49.704663   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:49.705114   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:52.392500   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:52.392560   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:52.392560   10176 sshutil.go:53] new ssh client: &{IP:172.23.182.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-969800\id_rsa Username:docker}
	I0122 10:58:52.411796   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:52.411796   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:52.412754   10176 sshutil.go:53] new ssh client: &{IP:172.23.182.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-969800\id_rsa Username:docker}
	I0122 10:58:52.487973   10176 command_runner.go:130] > {"iso_version": "v1.32.1-1703784139-17866", "kicbase_version": "v0.0.42-1703723663-17866", "minikube_version": "v1.32.0", "commit": "eb69424d8f623d7cabea57d4395ce87adf1b5fc3"}
	I0122 10:58:52.488809   10176 ssh_runner.go:235] Completed: cat /version.json: (5.0162464s)
	I0122 10:58:52.502299   10176 ssh_runner.go:195] Run: systemctl --version
	I0122 10:58:52.579182   10176 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0122 10:58:52.579508   10176 command_runner.go:130] > systemd 247 (247)
	I0122 10:58:52.579508   10176 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0122 10:58:52.579508   10176 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1182111s)
	I0122 10:58:52.592839   10176 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0122 10:58:52.600589   10176 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0122 10:58:52.601287   10176 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0122 10:58:52.614022   10176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0122 10:58:52.629161   10176 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0122 10:58:52.629512   10176 start.go:475] detecting cgroup driver to use...
	I0122 10:58:52.629804   10176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 10:58:52.662423   10176 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0122 10:58:52.677419   10176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0122 10:58:52.706434   10176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0122 10:58:52.723206   10176 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0122 10:58:52.736346   10176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0122 10:58:52.766629   10176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 10:58:52.795500   10176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0122 10:58:52.825136   10176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 10:58:52.853412   10176 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0122 10:58:52.881996   10176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0122 10:58:52.913498   10176 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0122 10:58:52.930750   10176 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0122 10:58:52.944733   10176 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0122 10:58:52.973118   10176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 10:58:53.199921   10176 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0122 10:58:53.229360   10176 start.go:475] detecting cgroup driver to use...
	I0122 10:58:53.244205   10176 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0122 10:58:53.265654   10176 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0122 10:58:53.265856   10176 command_runner.go:130] > [Unit]
	I0122 10:58:53.265856   10176 command_runner.go:130] > Description=Docker Application Container Engine
	I0122 10:58:53.265856   10176 command_runner.go:130] > Documentation=https://docs.docker.com
	I0122 10:58:53.265856   10176 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0122 10:58:53.265856   10176 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0122 10:58:53.265856   10176 command_runner.go:130] > StartLimitBurst=3
	I0122 10:58:53.265856   10176 command_runner.go:130] > StartLimitIntervalSec=60
	I0122 10:58:53.265856   10176 command_runner.go:130] > [Service]
	I0122 10:58:53.265856   10176 command_runner.go:130] > Type=notify
	I0122 10:58:53.265856   10176 command_runner.go:130] > Restart=on-failure
	I0122 10:58:53.265856   10176 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0122 10:58:53.265856   10176 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0122 10:58:53.265856   10176 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0122 10:58:53.265856   10176 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0122 10:58:53.265856   10176 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0122 10:58:53.265856   10176 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0122 10:58:53.265856   10176 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0122 10:58:53.265856   10176 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0122 10:58:53.265856   10176 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0122 10:58:53.265856   10176 command_runner.go:130] > ExecStart=
	I0122 10:58:53.265856   10176 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0122 10:58:53.265856   10176 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0122 10:58:53.265856   10176 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0122 10:58:53.265856   10176 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0122 10:58:53.265856   10176 command_runner.go:130] > LimitNOFILE=infinity
	I0122 10:58:53.265856   10176 command_runner.go:130] > LimitNPROC=infinity
	I0122 10:58:53.265856   10176 command_runner.go:130] > LimitCORE=infinity
	I0122 10:58:53.265856   10176 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0122 10:58:53.265856   10176 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0122 10:58:53.265856   10176 command_runner.go:130] > TasksMax=infinity
	I0122 10:58:53.265856   10176 command_runner.go:130] > TimeoutStartSec=0
	I0122 10:58:53.265856   10176 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0122 10:58:53.265856   10176 command_runner.go:130] > Delegate=yes
	I0122 10:58:53.265856   10176 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0122 10:58:53.265856   10176 command_runner.go:130] > KillMode=process
	I0122 10:58:53.266496   10176 command_runner.go:130] > [Install]
	I0122 10:58:53.266496   10176 command_runner.go:130] > WantedBy=multi-user.target
	I0122 10:58:53.282083   10176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 10:58:53.311447   10176 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0122 10:58:53.359743   10176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 10:58:53.396608   10176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0122 10:58:53.415523   10176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 10:58:53.444264   10176 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0122 10:58:53.456104   10176 ssh_runner.go:195] Run: which cri-dockerd
	I0122 10:58:53.461931   10176 command_runner.go:130] > /usr/bin/cri-dockerd
	I0122 10:58:53.476026   10176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0122 10:58:53.490730   10176 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0122 10:58:53.529862   10176 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0122 10:58:53.744013   10176 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0122 10:58:53.975729   10176 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0122 10:58:53.975983   10176 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0122 10:58:54.028787   10176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 10:58:54.249447   10176 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0122 11:00:05.628463   10176 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0122 11:00:05.628515   10176 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xe" for details.
	I0122 11:00:05.630958   10176 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.3811974s)
	I0122 11:00:05.644549   10176 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0122 11:00:05.669673   10176 command_runner.go:130] > -- Journal begins at Mon 2024-01-22 10:54:50 UTC, ends at Mon 2024-01-22 11:00:05 UTC. --
	I0122 11:00:05.670340   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	I0122 11:00:05.670340   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.640172956Z" level=info msg="Starting up"
	I0122 11:00:05.670340   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.641196772Z" level=info msg="containerd not running, starting managed containerd"
	I0122 11:00:05.670340   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.643170703Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=696
	I0122 11:00:05.670340   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.681307999Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	I0122 11:00:05.670433   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.711852976Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0122 11:00:05.670433   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.711950178Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.670523   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.714822923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.670535   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.714949725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.670535   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715319930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.670609   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715475833Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0122 11:00:05.670609   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715577834Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.670609   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715725437Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.670681   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715815938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.670681   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715939840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.670681   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716415947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.670751   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716515049Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0122 11:00:05.670751   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716533749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.670820   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716692852Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.670889   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716783253Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0122 11:00:05.670889   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716856454Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0122 11:00:05.670889   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716943756Z" level=info msg="metadata content store policy set" policy=shared
	I0122 11:00:05.670889   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731298280Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0122 11:00:05.670974   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731388081Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0122 11:00:05.670974   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731408582Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0122 11:00:05.670974   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731467083Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0122 11:00:05.671041   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731484883Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0122 11:00:05.671041   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731498383Z" level=info msg="NRI interface is disabled by configuration."
	I0122 11:00:05.671109   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731528784Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0122 11:00:05.671109   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731677686Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0122 11:00:05.671109   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731783888Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0122 11:00:05.671109   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731806788Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0122 11:00:05.671176   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731822488Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0122 11:00:05.671176   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731838588Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671176   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731859089Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671253   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731890989Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671301   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731924790Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671301   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731943290Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671301   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731959290Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671301   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731973491Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671392   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731992491Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0122 11:00:05.671392   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732288795Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0122 11:00:05.671392   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732814904Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671392   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732879305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671483   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732900805Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0122 11:00:05.671483   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732940506Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0122 11:00:05.671483   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733012907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671483   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733046407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671557   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733235810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671557   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733295711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671557   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733311411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671557   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733344312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671627   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733358612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671627   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733373012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671627   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733387713Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0122 11:00:05.671699   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733473314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671699   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733594716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671699   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733615016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671770   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733643517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671770   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733664217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671770   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733681917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671770   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733711618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671842   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733724518Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0122 11:00:05.671842   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733749418Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0122 11:00:05.671918   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733762418Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0122 11:00:05.671918   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733774019Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0122 11:00:05.671918   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734197325Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0122 11:00:05.672016   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734365128Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0122 11:00:05.672016   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734471630Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0122 11:00:05.672016   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734496030Z" level=info msg="containerd successfully booted in 0.054269s"
	I0122 11:00:05.672016   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.768676964Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0122 11:00:05.672016   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.783849401Z" level=info msg="Loading containers: start."
	I0122 11:00:05.672104   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.997712742Z" level=info msg="Loading containers: done."
	I0122 11:00:05.672104   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022214804Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	I0122 11:00:05.672104   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022261705Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	I0122 11:00:05.672104   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022274505Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	I0122 11:00:05.672104   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022281005Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	I0122 11:00:05.672104   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022301806Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	I0122 11:00:05.672183   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022416907Z" level=info msg="Daemon has completed initialization"
	I0122 11:00:05.672183   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.083652704Z" level=info msg="API listen on /var/run/docker.sock"
	I0122 11:00:05.672183   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.083926108Z" level=info msg="API listen on [::]:2376"
	I0122 11:00:05.672183   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 systemd[1]: Started Docker Application Container Engine.
	I0122 11:00:05.672248   10176 command_runner.go:130] > Jan 22 10:56:12 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	I0122 11:00:05.672248   10176 command_runner.go:130] > Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.621957907Z" level=info msg="Processing signal 'terminated'"
	I0122 11:00:05.672248   10176 command_runner.go:130] > Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623204307Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0122 11:00:05.672329   10176 command_runner.go:130] > Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623373807Z" level=info msg="Daemon shutdown complete"
	I0122 11:00:05.672329   10176 command_runner.go:130] > Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623417807Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0122 11:00:05.672385   10176 command_runner.go:130] > Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623429307Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0122 11:00:05.672460   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 systemd[1]: docker.service: Succeeded.
	I0122 11:00:05.672518   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	I0122 11:00:05.672518   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	I0122 11:00:05.672518   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.697995107Z" level=info msg="Starting up"
	I0122 11:00:05.672518   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.699240007Z" level=info msg="containerd not running, starting managed containerd"
	I0122 11:00:05.672574   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.700630907Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1031
	I0122 11:00:05.672596   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.732892407Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	I0122 11:00:05.672596   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.759794007Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0122 11:00:05.672649   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.759947407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.672649   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.762945807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.672725   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763156107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.672833   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763474707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.672888   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763800907Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0122 11:00:05.672929   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764170707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764208607Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764225307Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764251107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764439107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764536207Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764552707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764742807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764836407Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764867007Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764897007Z" level=info msg="metadata content store policy set" policy=shared
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765451307Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765545607Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765570407Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765599407Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765631407Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765670507Z" level=info msg="NRI interface is disabled by configuration."
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765715707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765770507Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765918107Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765954407Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765992607Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766011607Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766135807Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766161807Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766178607Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766211907Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766239507Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766258407Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766273107Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766317107Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767449407Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767559907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767585207Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767614707Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767679407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767698307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767713707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767728107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.674886   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767743307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.674886   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767759107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.674886   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767773607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.674886   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767790107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.674886   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767807907Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0122 11:00:05.675068   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767846207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.675068   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767884107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.675068   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767901707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.675068   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767917307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.675164   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767934707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.675164   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767953007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.675164   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767985707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.675164   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768000707Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0122 11:00:05.675164   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768018307Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0122 11:00:05.675282   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768110907Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0122 11:00:05.675282   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768150607Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0122 11:00:05.675282   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770293407Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0122 11:00:05.675396   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770589207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0122 11:00:05.675396   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770747207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0122 11:00:05.675396   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770779907Z" level=info msg="containerd successfully booted in 0.039287s"
	I0122 11:00:05.675396   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.796437607Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0122 11:00:05.675484   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.812301607Z" level=info msg="Loading containers: start."
	I0122 11:00:05.675484   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.959771407Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0122 11:00:05.675484   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.025356107Z" level=info msg="Loading containers: done."
	I0122 11:00:05.675484   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043938307Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	I0122 11:00:05.675563   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043967507Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	I0122 11:00:05.675563   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043976207Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	I0122 11:00:05.675563   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043982907Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	I0122 11:00:05.675679   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.044002007Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	I0122 11:00:05.675679   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.044081507Z" level=info msg="Daemon has completed initialization"
	I0122 11:00:05.675679   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.084793407Z" level=info msg="API listen on /var/run/docker.sock"
	I0122 11:00:05.675751   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 systemd[1]: Started Docker Application Container Engine.
	I0122 11:00:05.675751   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.087289507Z" level=info msg="API listen on [::]:2376"
	I0122 11:00:05.675751   10176 command_runner.go:130] > Jan 22 10:56:24 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	I0122 11:00:05.675751   10176 command_runner.go:130] > Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.888530207Z" level=info msg="Processing signal 'terminated'"
	I0122 11:00:05.675824   10176 command_runner.go:130] > Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890215407Z" level=info msg="Daemon shutdown complete"
	I0122 11:00:05.675824   10176 command_runner.go:130] > Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890260307Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0122 11:00:05.675992   10176 command_runner.go:130] > Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890587307Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	I0122 11:00:05.676061   10176 command_runner.go:130] > Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890740407Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0122 11:00:05.676061   10176 command_runner.go:130] > Jan 22 10:56:25 functional-969800 systemd[1]: docker.service: Succeeded.
	I0122 11:00:05.676135   10176 command_runner.go:130] > Jan 22 10:56:25 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	I0122 11:00:05.676135   10176 command_runner.go:130] > Jan 22 10:56:25 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	I0122 11:00:05.676135   10176 command_runner.go:130] > Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.965408907Z" level=info msg="Starting up"
	I0122 11:00:05.676135   10176 command_runner.go:130] > Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.966518207Z" level=info msg="containerd not running, starting managed containerd"
	I0122 11:00:05.676135   10176 command_runner.go:130] > Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.967588307Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1337
	I0122 11:00:05.676135   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.001795407Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	I0122 11:00:05.676234   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.035509507Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0122 11:00:05.676234   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.035631707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.676308   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038165807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.676308   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038334707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.676345   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038628507Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.676345   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038744407Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0122 11:00:05.676418   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038816807Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.676418   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038858807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.676418   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038871007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.676492   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038894707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.676492   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039248607Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.676492   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039359407Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0122 11:00:05.676564   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039378107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.676594   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039547707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.676653   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039641707Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039668207Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039685807Z" level=info msg="metadata content store policy set" policy=shared
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039841507Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039933007Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040079807Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040230207Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040269807Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040281707Z" level=info msg="NRI interface is disabled by configuration."
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040294707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040361507Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040377307Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040391107Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040409607Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040429307Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040449707Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040463607Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040482307Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040514007Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040527507Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040540307Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040551607Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040608107Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042558207Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042674107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677265   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042695907Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0122 11:00:05.677265   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042734707Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0122 11:00:05.677265   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042787207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677265   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042916307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677265   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042937207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677265   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042958307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677265   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042974407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677410   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042993607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043009507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043463207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043558107Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043662307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043706107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043727707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043761007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043778507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043801907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043893407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043920207Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043942907Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043958907Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043974107Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044581807Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044705407Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044803207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044899507Z" level=info msg="containerd successfully booted in 0.045219s"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1331]: time="2024-01-22T10:56:26.371254407Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.294692607Z" level=info msg="Loading containers: start."
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.459462607Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.525291207Z" level=info msg="Loading containers: done."
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544516307Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544552907Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544562107Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544572107Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544597307Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544708807Z" level=info msg="Daemon has completed initialization"
	I0122 11:00:05.678028   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.576971407Z" level=info msg="API listen on [::]:2376"
	I0122 11:00:05.678028   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.577092907Z" level=info msg="API listen on /var/run/docker.sock"
	I0122 11:00:05.678028   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 systemd[1]: Started Docker Application Container Engine.
	I0122 11:00:05.678028   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643012417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678028   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643161510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643191508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643608487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648436541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648545436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648613232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648628031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650622030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650800621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650820520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650831019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.675561460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.675860444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.676066234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.676182428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627214283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627479370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627708859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627925149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.661590442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.665215768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.665413259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.667762047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.837739331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838092414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678708   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838119012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678708   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838132512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678708   10176 command_runner.go:130] > Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029726252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678708   10176 command_runner.go:130] > Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029885345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678708   10176 command_runner.go:130] > Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029907544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678852   10176 command_runner.go:130] > Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029921943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678884   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442471250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678884   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442551949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678933   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442585148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442603448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.636810797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637124693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637309591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637408590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.128989239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129139537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129211936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129226336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.005359252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.018527500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.020526276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.020768074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531424444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531481843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531496243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531507943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.163527770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.164982859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.165023859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.165340756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.271367222Z" level=info msg="Processing signal 'terminated'"
	I0122 11:00:05.679489   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.408491526Z" level=info msg="ignoring event" container=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.679489   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.408935427Z" level=info msg="shim disconnected" id=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c namespace=moby
	I0122 11:00:05.679489   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.409367629Z" level=warning msg="cleaning up after shim disconnected" id=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c namespace=moby
	I0122 11:00:05.679489   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.409386829Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482131396Z" level=info msg="shim disconnected" id=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482242197Z" level=warning msg="cleaning up after shim disconnected" id=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482643498Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.485265908Z" level=info msg="ignoring event" container=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.514982317Z" level=info msg="ignoring event" container=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.517274025Z" level=info msg="shim disconnected" id=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.518024528Z" level=warning msg="cleaning up after shim disconnected" id=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.518088128Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.533465185Z" level=info msg="ignoring event" container=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533775786Z" level=info msg="shim disconnected" id=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533878986Z" level=warning msg="cleaning up after shim disconnected" id=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533896387Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.541816616Z" level=info msg="ignoring event" container=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.544778327Z" level=info msg="shim disconnected" id=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.545052128Z" level=warning msg="cleaning up after shim disconnected" id=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.545071928Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.564756500Z" level=info msg="ignoring event" container=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.566095405Z" level=info msg="shim disconnected" id=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.580413557Z" level=warning msg="cleaning up after shim disconnected" id=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.580441258Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680179   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590163393Z" level=info msg="shim disconnected" id=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f namespace=moby
	I0122 11:00:05.680179   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590197393Z" level=warning msg="cleaning up after shim disconnected" id=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f namespace=moby
	I0122 11:00:05.680179   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590265894Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.600700032Z" level=info msg="shim disconnected" id=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.600935033Z" level=info msg="ignoring event" container=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.600978833Z" level=info msg="ignoring event" container=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.611137470Z" level=info msg="ignoring event" container=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611370671Z" level=warning msg="cleaning up after shim disconnected" id=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611429671Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611635772Z" level=info msg="shim disconnected" id=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.612703976Z" level=warning msg="cleaning up after shim disconnected" id=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.613303678Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.613702780Z" level=info msg="ignoring event" container=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.616524590Z" level=info msg="ignoring event" container=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.618830499Z" level=info msg="shim disconnected" id=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.618976199Z" level=warning msg="cleaning up after shim disconnected" id=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.627451230Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.628495534Z" level=info msg="shim disconnected" id=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.630007140Z" level=warning msg="cleaning up after shim disconnected" id=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.630196840Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.648394307Z" level=info msg="shim disconnected" id=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.650553715Z" level=info msg="ignoring event" container=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.658301844Z" level=warning msg="cleaning up after shim disconnected" id=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.658345944Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:59 functional-969800 dockerd[1331]: time="2024-01-22T10:58:59.444520533Z" level=info msg="ignoring event" container=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.447631244Z" level=info msg="shim disconnected" id=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.448443247Z" level=warning msg="cleaning up after shim disconnected" id=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.448523248Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.436253177Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.489047672Z" level=info msg="ignoring event" container=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.490548477Z" level=info msg="shim disconnected" id=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.490993079Z" level=warning msg="cleaning up after shim disconnected" id=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.491010379Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.536562746Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.536997048Z" level=info msg="Daemon shutdown complete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.537118848Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.537155648Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:05 functional-969800 systemd[1]: docker.service: Succeeded.
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:05 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:05 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:05 functional-969800 dockerd[5681]: time="2024-01-22T10:59:05.618184021Z" level=info msg="Starting up"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 11:00:05 functional-969800 dockerd[5681]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 11:00:05 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 11:00:05 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 11:00:05 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	I0122 11:00:05.703571   10176 out.go:177] 
	W0122 11:00:05.704224   10176 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Mon 2024-01-22 10:54:50 UTC, ends at Mon 2024-01-22 11:00:05 UTC. --
	Jan 22 10:55:41 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.640172956Z" level=info msg="Starting up"
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.641196772Z" level=info msg="containerd not running, starting managed containerd"
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.643170703Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=696
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.681307999Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.711852976Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.711950178Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.714822923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.714949725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715319930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715475833Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715577834Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715725437Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715815938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715939840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716415947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716515049Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716533749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716692852Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716783253Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716856454Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716943756Z" level=info msg="metadata content store policy set" policy=shared
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731298280Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731388081Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731408582Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731467083Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731484883Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731498383Z" level=info msg="NRI interface is disabled by configuration."
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731528784Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731677686Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731783888Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731806788Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731822488Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731838588Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731859089Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731890989Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731924790Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731943290Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731959290Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731973491Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731992491Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732288795Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732814904Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732879305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732900805Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732940506Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733012907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733046407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733235810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733295711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733311411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733344312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733358612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733373012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733387713Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733473314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733594716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733615016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733643517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733664217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733681917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733711618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733724518Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733749418Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733762418Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733774019Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734197325Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734365128Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734471630Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734496030Z" level=info msg="containerd successfully booted in 0.054269s"
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.768676964Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.783849401Z" level=info msg="Loading containers: start."
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.997712742Z" level=info msg="Loading containers: done."
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022214804Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022261705Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022274505Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022281005Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022301806Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022416907Z" level=info msg="Daemon has completed initialization"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.083652704Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.083926108Z" level=info msg="API listen on [::]:2376"
	Jan 22 10:55:42 functional-969800 systemd[1]: Started Docker Application Container Engine.
	Jan 22 10:56:12 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.621957907Z" level=info msg="Processing signal 'terminated'"
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623204307Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623373807Z" level=info msg="Daemon shutdown complete"
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623417807Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623429307Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 22 10:56:13 functional-969800 systemd[1]: docker.service: Succeeded.
	Jan 22 10:56:13 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 10:56:13 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.697995107Z" level=info msg="Starting up"
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.699240007Z" level=info msg="containerd not running, starting managed containerd"
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.700630907Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1031
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.732892407Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.759794007Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.759947407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.762945807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763156107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763474707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763800907Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764170707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764208607Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764225307Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764251107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764439107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764536207Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764552707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764742807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764836407Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764867007Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764897007Z" level=info msg="metadata content store policy set" policy=shared
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765451307Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765545607Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765570407Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765599407Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765631407Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765670507Z" level=info msg="NRI interface is disabled by configuration."
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765715707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765770507Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765918107Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765954407Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765992607Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766011607Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766135807Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766161807Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766178607Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766211907Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766239507Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766258407Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766273107Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766317107Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767449407Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767559907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767585207Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767614707Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767679407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767698307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767713707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767728107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767743307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767759107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767773607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767790107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767807907Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767846207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767884107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767901707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767917307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767934707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767953007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767985707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768000707Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768018307Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768110907Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768150607Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770293407Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770589207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770747207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770779907Z" level=info msg="containerd successfully booted in 0.039287s"
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.796437607Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.812301607Z" level=info msg="Loading containers: start."
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.959771407Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.025356107Z" level=info msg="Loading containers: done."
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043938307Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043967507Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043976207Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043982907Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.044002007Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.044081507Z" level=info msg="Daemon has completed initialization"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.084793407Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 22 10:56:14 functional-969800 systemd[1]: Started Docker Application Container Engine.
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.087289507Z" level=info msg="API listen on [::]:2376"
	Jan 22 10:56:24 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.888530207Z" level=info msg="Processing signal 'terminated'"
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890215407Z" level=info msg="Daemon shutdown complete"
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890260307Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890587307Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890740407Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 22 10:56:25 functional-969800 systemd[1]: docker.service: Succeeded.
	Jan 22 10:56:25 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 10:56:25 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.965408907Z" level=info msg="Starting up"
	Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.966518207Z" level=info msg="containerd not running, starting managed containerd"
	Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.967588307Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1337
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.001795407Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.035509507Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.035631707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038165807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038334707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038628507Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038744407Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038816807Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038858807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038871007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038894707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039248607Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039359407Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039378107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039547707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039641707Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039668207Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039685807Z" level=info msg="metadata content store policy set" policy=shared
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039841507Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039933007Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040079807Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040230207Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040269807Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040281707Z" level=info msg="NRI interface is disabled by configuration."
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040294707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040361507Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040377307Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040391107Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040409607Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040429307Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040449707Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040463607Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040482307Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040514007Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040527507Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040540307Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040551607Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040608107Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042558207Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042674107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042695907Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042734707Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042787207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042916307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042937207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042958307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042974407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042993607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043009507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043463207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043558107Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043662307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043706107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043727707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043761007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043778507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043801907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043893407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043920207Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043942907Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043958907Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043974107Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044581807Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044705407Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044803207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044899507Z" level=info msg="containerd successfully booted in 0.045219s"
	Jan 22 10:56:26 functional-969800 dockerd[1331]: time="2024-01-22T10:56:26.371254407Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.294692607Z" level=info msg="Loading containers: start."
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.459462607Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.525291207Z" level=info msg="Loading containers: done."
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544516307Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544552907Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544562107Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544572107Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544597307Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544708807Z" level=info msg="Daemon has completed initialization"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.576971407Z" level=info msg="API listen on [::]:2376"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.577092907Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 22 10:56:30 functional-969800 systemd[1]: Started Docker Application Container Engine.
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643012417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643161510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643191508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643608487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648436541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648545436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648613232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648628031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650622030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650800621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650820520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650831019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.675561460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.675860444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.676066234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.676182428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627214283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627479370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627708859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627925149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.661590442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.665215768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.665413259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.667762047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.837739331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838092414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838119012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838132512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029726252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029885345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029907544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029921943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442471250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442551949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442585148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442603448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.636810797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637124693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637309591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637408590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.128989239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129139537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129211936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129226336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.005359252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.018527500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.020526276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.020768074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531424444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531481843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531496243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531507943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.163527770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.164982859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.165023859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.165340756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:58:54 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.271367222Z" level=info msg="Processing signal 'terminated'"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.408491526Z" level=info msg="ignoring event" container=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.408935427Z" level=info msg="shim disconnected" id=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.409367629Z" level=warning msg="cleaning up after shim disconnected" id=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.409386829Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482131396Z" level=info msg="shim disconnected" id=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482242197Z" level=warning msg="cleaning up after shim disconnected" id=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482643498Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.485265908Z" level=info msg="ignoring event" container=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.514982317Z" level=info msg="ignoring event" container=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.517274025Z" level=info msg="shim disconnected" id=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.518024528Z" level=warning msg="cleaning up after shim disconnected" id=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.518088128Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.533465185Z" level=info msg="ignoring event" container=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533775786Z" level=info msg="shim disconnected" id=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533878986Z" level=warning msg="cleaning up after shim disconnected" id=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533896387Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.541816616Z" level=info msg="ignoring event" container=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.544778327Z" level=info msg="shim disconnected" id=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.545052128Z" level=warning msg="cleaning up after shim disconnected" id=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.545071928Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.564756500Z" level=info msg="ignoring event" container=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.566095405Z" level=info msg="shim disconnected" id=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.580413557Z" level=warning msg="cleaning up after shim disconnected" id=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.580441258Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590163393Z" level=info msg="shim disconnected" id=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590197393Z" level=warning msg="cleaning up after shim disconnected" id=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590265894Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.600700032Z" level=info msg="shim disconnected" id=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.600935033Z" level=info msg="ignoring event" container=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.600978833Z" level=info msg="ignoring event" container=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.611137470Z" level=info msg="ignoring event" container=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611370671Z" level=warning msg="cleaning up after shim disconnected" id=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611429671Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611635772Z" level=info msg="shim disconnected" id=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.612703976Z" level=warning msg="cleaning up after shim disconnected" id=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.613303678Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.613702780Z" level=info msg="ignoring event" container=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.616524590Z" level=info msg="ignoring event" container=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.618830499Z" level=info msg="shim disconnected" id=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.618976199Z" level=warning msg="cleaning up after shim disconnected" id=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.627451230Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.628495534Z" level=info msg="shim disconnected" id=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.630007140Z" level=warning msg="cleaning up after shim disconnected" id=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.630196840Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.648394307Z" level=info msg="shim disconnected" id=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.650553715Z" level=info msg="ignoring event" container=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.658301844Z" level=warning msg="cleaning up after shim disconnected" id=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.658345944Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:59 functional-969800 dockerd[1331]: time="2024-01-22T10:58:59.444520533Z" level=info msg="ignoring event" container=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.447631244Z" level=info msg="shim disconnected" id=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 namespace=moby
	Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.448443247Z" level=warning msg="cleaning up after shim disconnected" id=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 namespace=moby
	Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.448523248Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.436253177Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.489047672Z" level=info msg="ignoring event" container=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.490548477Z" level=info msg="shim disconnected" id=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.490993079Z" level=warning msg="cleaning up after shim disconnected" id=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.491010379Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.536562746Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.536997048Z" level=info msg="Daemon shutdown complete"
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.537118848Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.537155648Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 22 10:59:05 functional-969800 systemd[1]: docker.service: Succeeded.
	Jan 22 10:59:05 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 10:59:05 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 10:59:05 functional-969800 dockerd[5681]: time="2024-01-22T10:59:05.618184021Z" level=info msg="Starting up"
	Jan 22 11:00:05 functional-969800 dockerd[5681]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:00:05 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:00:05 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:00:05 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Mon 2024-01-22 10:54:50 UTC, ends at Mon 2024-01-22 11:00:05 UTC. --
	Jan 22 10:55:41 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.640172956Z" level=info msg="Starting up"
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.641196772Z" level=info msg="containerd not running, starting managed containerd"
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.643170703Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=696
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.681307999Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.711852976Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.711950178Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.714822923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.714949725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715319930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715475833Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715577834Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715725437Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715815938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715939840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716415947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716515049Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716533749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716692852Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716783253Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716856454Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716943756Z" level=info msg="metadata content store policy set" policy=shared
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731298280Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731388081Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731408582Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731467083Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731484883Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731498383Z" level=info msg="NRI interface is disabled by configuration."
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731528784Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731677686Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731783888Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731806788Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731822488Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731838588Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731859089Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731890989Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731924790Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731943290Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731959290Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731973491Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731992491Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732288795Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732814904Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732879305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732900805Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732940506Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733012907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733046407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733235810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733295711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733311411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733344312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733358612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733373012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733387713Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733473314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733594716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733615016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733643517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733664217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733681917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733711618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733724518Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733749418Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733762418Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733774019Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734197325Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734365128Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734471630Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734496030Z" level=info msg="containerd successfully booted in 0.054269s"
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.768676964Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.783849401Z" level=info msg="Loading containers: start."
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.997712742Z" level=info msg="Loading containers: done."
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022214804Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022261705Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022274505Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022281005Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022301806Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022416907Z" level=info msg="Daemon has completed initialization"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.083652704Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.083926108Z" level=info msg="API listen on [::]:2376"
	Jan 22 10:55:42 functional-969800 systemd[1]: Started Docker Application Container Engine.
	Jan 22 10:56:12 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.621957907Z" level=info msg="Processing signal 'terminated'"
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623204307Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623373807Z" level=info msg="Daemon shutdown complete"
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623417807Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623429307Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 22 10:56:13 functional-969800 systemd[1]: docker.service: Succeeded.
	Jan 22 10:56:13 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 10:56:13 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.697995107Z" level=info msg="Starting up"
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.699240007Z" level=info msg="containerd not running, starting managed containerd"
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.700630907Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1031
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.732892407Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.759794007Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.759947407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.762945807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763156107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763474707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763800907Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764170707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764208607Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764225307Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764251107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764439107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764536207Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764552707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764742807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764836407Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764867007Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764897007Z" level=info msg="metadata content store policy set" policy=shared
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765451307Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765545607Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765570407Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765599407Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765631407Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765670507Z" level=info msg="NRI interface is disabled by configuration."
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765715707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765770507Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765918107Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765954407Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765992607Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766011607Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766135807Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766161807Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766178607Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766211907Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766239507Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766258407Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766273107Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766317107Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767449407Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767559907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767585207Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767614707Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767679407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767698307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767713707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767728107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767743307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767759107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767773607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767790107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767807907Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767846207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767884107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767901707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767917307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767934707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767953007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767985707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768000707Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768018307Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768110907Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768150607Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770293407Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770589207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770747207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770779907Z" level=info msg="containerd successfully booted in 0.039287s"
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.796437607Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.812301607Z" level=info msg="Loading containers: start."
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.959771407Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.025356107Z" level=info msg="Loading containers: done."
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043938307Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043967507Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043976207Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043982907Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.044002007Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.044081507Z" level=info msg="Daemon has completed initialization"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.084793407Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 22 10:56:14 functional-969800 systemd[1]: Started Docker Application Container Engine.
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.087289507Z" level=info msg="API listen on [::]:2376"
	Jan 22 10:56:24 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.888530207Z" level=info msg="Processing signal 'terminated'"
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890215407Z" level=info msg="Daemon shutdown complete"
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890260307Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890587307Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890740407Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 22 10:56:25 functional-969800 systemd[1]: docker.service: Succeeded.
	Jan 22 10:56:25 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 10:56:25 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.965408907Z" level=info msg="Starting up"
	Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.966518207Z" level=info msg="containerd not running, starting managed containerd"
	Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.967588307Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1337
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.001795407Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.035509507Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.035631707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038165807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038334707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038628507Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038744407Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038816807Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038858807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038871007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038894707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039248607Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039359407Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039378107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039547707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039641707Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039668207Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039685807Z" level=info msg="metadata content store policy set" policy=shared
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039841507Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039933007Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040079807Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040230207Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040269807Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040281707Z" level=info msg="NRI interface is disabled by configuration."
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040294707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040361507Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040377307Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040391107Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040409607Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040429307Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040449707Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040463607Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040482307Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040514007Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040527507Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040540307Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040551607Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040608107Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042558207Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042674107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042695907Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042734707Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042787207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042916307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042937207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042958307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042974407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042993607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043009507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043463207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043558107Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043662307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043706107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043727707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043761007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043778507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043801907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043893407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043920207Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043942907Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043958907Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043974107Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044581807Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044705407Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044803207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044899507Z" level=info msg="containerd successfully booted in 0.045219s"
	Jan 22 10:56:26 functional-969800 dockerd[1331]: time="2024-01-22T10:56:26.371254407Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.294692607Z" level=info msg="Loading containers: start."
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.459462607Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.525291207Z" level=info msg="Loading containers: done."
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544516307Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544552907Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544562107Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544572107Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544597307Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544708807Z" level=info msg="Daemon has completed initialization"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.576971407Z" level=info msg="API listen on [::]:2376"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.577092907Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 22 10:56:30 functional-969800 systemd[1]: Started Docker Application Container Engine.
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643012417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643161510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643191508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643608487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648436541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648545436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648613232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648628031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650622030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650800621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650820520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650831019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.675561460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.675860444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.676066234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.676182428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627214283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627479370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627708859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627925149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.661590442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.665215768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.665413259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.667762047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.837739331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838092414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838119012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838132512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029726252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029885345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029907544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029921943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442471250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442551949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442585148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442603448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.636810797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637124693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637309591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637408590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.128989239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129139537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129211936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129226336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.005359252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.018527500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.020526276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.020768074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531424444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531481843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531496243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531507943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.163527770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.164982859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.165023859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.165340756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:58:54 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.271367222Z" level=info msg="Processing signal 'terminated'"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.408491526Z" level=info msg="ignoring event" container=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.408935427Z" level=info msg="shim disconnected" id=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.409367629Z" level=warning msg="cleaning up after shim disconnected" id=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.409386829Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482131396Z" level=info msg="shim disconnected" id=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482242197Z" level=warning msg="cleaning up after shim disconnected" id=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482643498Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.485265908Z" level=info msg="ignoring event" container=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.514982317Z" level=info msg="ignoring event" container=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.517274025Z" level=info msg="shim disconnected" id=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.518024528Z" level=warning msg="cleaning up after shim disconnected" id=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.518088128Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.533465185Z" level=info msg="ignoring event" container=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533775786Z" level=info msg="shim disconnected" id=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533878986Z" level=warning msg="cleaning up after shim disconnected" id=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533896387Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.541816616Z" level=info msg="ignoring event" container=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.544778327Z" level=info msg="shim disconnected" id=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.545052128Z" level=warning msg="cleaning up after shim disconnected" id=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.545071928Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.564756500Z" level=info msg="ignoring event" container=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.566095405Z" level=info msg="shim disconnected" id=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.580413557Z" level=warning msg="cleaning up after shim disconnected" id=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.580441258Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590163393Z" level=info msg="shim disconnected" id=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590197393Z" level=warning msg="cleaning up after shim disconnected" id=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590265894Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.600700032Z" level=info msg="shim disconnected" id=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.600935033Z" level=info msg="ignoring event" container=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.600978833Z" level=info msg="ignoring event" container=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.611137470Z" level=info msg="ignoring event" container=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611370671Z" level=warning msg="cleaning up after shim disconnected" id=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611429671Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611635772Z" level=info msg="shim disconnected" id=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.612703976Z" level=warning msg="cleaning up after shim disconnected" id=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.613303678Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.613702780Z" level=info msg="ignoring event" container=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.616524590Z" level=info msg="ignoring event" container=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.618830499Z" level=info msg="shim disconnected" id=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.618976199Z" level=warning msg="cleaning up after shim disconnected" id=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.627451230Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.628495534Z" level=info msg="shim disconnected" id=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.630007140Z" level=warning msg="cleaning up after shim disconnected" id=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.630196840Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.648394307Z" level=info msg="shim disconnected" id=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.650553715Z" level=info msg="ignoring event" container=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.658301844Z" level=warning msg="cleaning up after shim disconnected" id=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.658345944Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:59 functional-969800 dockerd[1331]: time="2024-01-22T10:58:59.444520533Z" level=info msg="ignoring event" container=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.447631244Z" level=info msg="shim disconnected" id=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 namespace=moby
	Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.448443247Z" level=warning msg="cleaning up after shim disconnected" id=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 namespace=moby
	Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.448523248Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.436253177Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.489047672Z" level=info msg="ignoring event" container=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.490548477Z" level=info msg="shim disconnected" id=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.490993079Z" level=warning msg="cleaning up after shim disconnected" id=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.491010379Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.536562746Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.536997048Z" level=info msg="Daemon shutdown complete"
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.537118848Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.537155648Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 22 10:59:05 functional-969800 systemd[1]: docker.service: Succeeded.
	Jan 22 10:59:05 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 10:59:05 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 10:59:05 functional-969800 dockerd[5681]: time="2024-01-22T10:59:05.618184021Z" level=info msg="Starting up"
	Jan 22 11:00:05 functional-969800 dockerd[5681]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:00:05 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:00:05 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:00:05 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0122 11:00:05.705403   10176 out.go:239] * 
	W0122 11:00:05.706768   10176 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0122 11:00:05.707971   10176 out.go:177] 

** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-windows-amd64.exe start -p functional-969800 --alsologtostderr -v=8": exit status 90
functional_test.go:659: soft start took 2m26.0411237s for "functional-969800" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-969800 -n functional-969800
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-969800 -n functional-969800: exit status 2 (12.0525062s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0122 11:00:06.447034    7808 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/SoftStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-969800 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-969800 logs -n 25: (2m48.0968526s)
helpers_test.go:252: TestFunctional/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                 Args                                  |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| ip      | addons-760600 ip                                                      | addons-760600     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:44 UTC | 22 Jan 24 10:45 UTC |
	| addons  | addons-760600 addons disable                                          | addons-760600     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:45 UTC | 22 Jan 24 10:45 UTC |
	|         | ingress-dns --alsologtostderr                                         |                   |                   |         |                     |                     |
	|         | -v=1                                                                  |                   |                   |         |                     |                     |
	| addons  | addons-760600 addons disable                                          | addons-760600     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:45 UTC | 22 Jan 24 10:45 UTC |
	|         | ingress --alsologtostderr -v=1                                        |                   |                   |         |                     |                     |
	| addons  | addons-760600 addons disable                                          | addons-760600     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:45 UTC | 22 Jan 24 10:46 UTC |
	|         | gcp-auth --alsologtostderr                                            |                   |                   |         |                     |                     |
	|         | -v=1                                                                  |                   |                   |         |                     |                     |
	| stop    | -p addons-760600                                                      | addons-760600     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:46 UTC | 22 Jan 24 10:46 UTC |
	| addons  | enable dashboard -p                                                   | addons-760600     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:46 UTC | 22 Jan 24 10:47 UTC |
	|         | addons-760600                                                         |                   |                   |         |                     |                     |
	| addons  | disable dashboard -p                                                  | addons-760600     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:47 UTC | 22 Jan 24 10:47 UTC |
	|         | addons-760600                                                         |                   |                   |         |                     |                     |
	| addons  | disable gvisor -p                                                     | addons-760600     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:47 UTC | 22 Jan 24 10:47 UTC |
	|         | addons-760600                                                         |                   |                   |         |                     |                     |
	| delete  | -p addons-760600                                                      | addons-760600     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:47 UTC | 22 Jan 24 10:47 UTC |
	| start   | -p nospam-526800 -n=1 --memory=2250 --wait=false                      | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:47 UTC | 22 Jan 24 10:50 UTC |
	|         | --log_dir=C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 |                   |                   |         |                     |                     |
	|         | --driver=hyperv                                                       |                   |                   |         |                     |                     |
	| start   | nospam-526800 --log_dir                                               | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:50 UTC |                     |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800           |                   |                   |         |                     |                     |
	|         | start --dry-run                                                       |                   |                   |         |                     |                     |
	| start   | nospam-526800 --log_dir                                               | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:51 UTC |                     |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800           |                   |                   |         |                     |                     |
	|         | start --dry-run                                                       |                   |                   |         |                     |                     |
	| start   | nospam-526800 --log_dir                                               | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:51 UTC |                     |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800           |                   |                   |         |                     |                     |
	|         | start --dry-run                                                       |                   |                   |         |                     |                     |
	| pause   | nospam-526800 --log_dir                                               | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:51 UTC | 22 Jan 24 10:52 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800           |                   |                   |         |                     |                     |
	|         | pause                                                                 |                   |                   |         |                     |                     |
	| pause   | nospam-526800 --log_dir                                               | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:52 UTC | 22 Jan 24 10:52 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800           |                   |                   |         |                     |                     |
	|         | pause                                                                 |                   |                   |         |                     |                     |
	| pause   | nospam-526800 --log_dir                                               | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:52 UTC | 22 Jan 24 10:52 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800           |                   |                   |         |                     |                     |
	|         | pause                                                                 |                   |                   |         |                     |                     |
	| unpause | nospam-526800 --log_dir                                               | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:52 UTC | 22 Jan 24 10:52 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800           |                   |                   |         |                     |                     |
	|         | unpause                                                               |                   |                   |         |                     |                     |
	| unpause | nospam-526800 --log_dir                                               | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:52 UTC | 22 Jan 24 10:52 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800           |                   |                   |         |                     |                     |
	|         | unpause                                                               |                   |                   |         |                     |                     |
	| unpause | nospam-526800 --log_dir                                               | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:52 UTC | 22 Jan 24 10:52 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800           |                   |                   |         |                     |                     |
	|         | unpause                                                               |                   |                   |         |                     |                     |
	| stop    | nospam-526800 --log_dir                                               | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:52 UTC | 22 Jan 24 10:53 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800           |                   |                   |         |                     |                     |
	|         | stop                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-526800 --log_dir                                               | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:53 UTC | 22 Jan 24 10:53 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800           |                   |                   |         |                     |                     |
	|         | stop                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-526800 --log_dir                                               | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:53 UTC | 22 Jan 24 10:53 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800           |                   |                   |         |                     |                     |
	|         | stop                                                                  |                   |                   |         |                     |                     |
	| delete  | -p nospam-526800                                                      | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:53 UTC | 22 Jan 24 10:53 UTC |
	| start   | -p functional-969800                                                  | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:53 UTC | 22 Jan 24 10:57 UTC |
	|         | --memory=4000                                                         |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                                 |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                            |                   |                   |         |                     |                     |
	| start   | -p functional-969800                                                  | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:57 UTC |                     |
	|         | --alsologtostderr -v=8                                                |                   |                   |         |                     |                     |
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/22 10:57:40
	Running on machine: minikube7
	Binary: Built with gc go1.21.6 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0122 10:57:40.469886   10176 out.go:296] Setting OutFile to fd 668 ...
	I0122 10:57:40.471315   10176 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0122 10:57:40.471315   10176 out.go:309] Setting ErrFile to fd 532...
	I0122 10:57:40.471403   10176 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0122 10:57:40.492855   10176 out.go:303] Setting JSON to false
	I0122 10:57:40.496696   10176 start.go:128] hostinfo: {"hostname":"minikube7","uptime":40429,"bootTime":1705880631,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3930 Build 19045.3930","kernelVersion":"10.0.19045.3930 Build 19045.3930","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0122 10:57:40.496696   10176 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0122 10:57:40.498142   10176 out.go:177] * [functional-969800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	I0122 10:57:40.498142   10176 notify.go:220] Checking for updates...
	I0122 10:57:40.499579   10176 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 10:57:40.499736   10176 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0122 10:57:40.500343   10176 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0122 10:57:40.501324   10176 out.go:177]   - MINIKUBE_LOCATION=18014
	I0122 10:57:40.501745   10176 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 10:57:40.502974   10176 config.go:182] Loaded profile config "functional-969800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 10:57:40.502974   10176 driver.go:392] Setting default libvirt URI to qemu:///system
	I0122 10:57:45.928080   10176 out.go:177] * Using the hyperv driver based on existing profile
	I0122 10:57:45.928983   10176 start.go:298] selected driver: hyperv
	I0122 10:57:45.928983   10176 start.go:902] validating driver "hyperv" against &{Name:functional-969800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-969800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:172.23.182.59 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0122 10:57:45.929543   10176 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0122 10:57:45.979518   10176 cni.go:84] Creating CNI manager for ""
	I0122 10:57:45.980100   10176 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0122 10:57:45.980100   10176 start_flags.go:321] config:
	{Name:functional-969800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-969800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:172.23.182.59 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0122 10:57:45.980261   10176 iso.go:125] acquiring lock: {Name:mkdec6f26395f187031104148d2e41f41e1ae42b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 10:57:45.981049   10176 out.go:177] * Starting control plane node functional-969800 in cluster functional-969800
	I0122 10:57:45.982349   10176 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0122 10:57:45.982349   10176 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0122 10:57:45.982349   10176 cache.go:56] Caching tarball of preloaded images
	I0122 10:57:45.983003   10176 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0122 10:57:45.983003   10176 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0122 10:57:45.983545   10176 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-969800\config.json ...
	I0122 10:57:45.985896   10176 start.go:365] acquiring machines lock for functional-969800: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0122 10:57:45.985896   10176 start.go:369] acquired machines lock for "functional-969800" in 0s
	I0122 10:57:45.985896   10176 start.go:96] Skipping create...Using existing machine configuration
	I0122 10:57:45.985896   10176 fix.go:54] fixHost starting: 
	I0122 10:57:45.985896   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:57:48.714864   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:57:48.714864   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:57:48.714864   10176 fix.go:102] recreateIfNeeded on functional-969800: state=Running err=<nil>
	W0122 10:57:48.714864   10176 fix.go:128] unexpected machine state, will restart: <nil>
	I0122 10:57:48.716293   10176 out.go:177] * Updating the running hyperv "functional-969800" VM ...
	I0122 10:57:48.717336   10176 machine.go:88] provisioning docker machine ...
	I0122 10:57:48.717336   10176 buildroot.go:166] provisioning hostname "functional-969800"
	I0122 10:57:48.717336   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:57:50.876024   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:57:50.876229   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:57:50.876229   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:57:53.470625   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:57:53.470625   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:57:53.475376   10176 main.go:141] libmachine: Using SSH client type: native
	I0122 10:57:53.476351   10176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 10:57:53.476351   10176 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-969800 && echo "functional-969800" | sudo tee /etc/hostname
	I0122 10:57:53.638784   10176 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-969800
	
	I0122 10:57:53.638784   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:57:55.805671   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:57:55.805815   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:57:55.805815   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:57:58.401038   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:57:58.401038   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:57:58.407147   10176 main.go:141] libmachine: Using SSH client type: native
	I0122 10:57:58.407915   10176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 10:57:58.407915   10176 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-969800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-969800/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-969800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0122 10:57:58.548972   10176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 10:57:58.548972   10176 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0122 10:57:58.549601   10176 buildroot.go:174] setting up certificates
	I0122 10:57:58.549601   10176 provision.go:83] configureAuth start
	I0122 10:57:58.549601   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:00.709217   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:00.709410   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:00.709520   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:03.272329   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:03.272570   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:03.272686   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:05.424282   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:05.424282   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:05.424282   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:07.989209   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:07.989209   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:07.989209   10176 provision.go:138] copyHostCerts
	I0122 10:58:07.989628   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0122 10:58:07.990020   10176 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0122 10:58:07.990122   10176 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0122 10:58:07.990638   10176 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0122 10:58:07.992053   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0122 10:58:07.992258   10176 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0122 10:58:07.992345   10176 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0122 10:58:07.992434   10176 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0122 10:58:07.993591   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0122 10:58:07.993837   10176 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0122 10:58:07.993924   10176 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0122 10:58:07.994335   10176 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0122 10:58:07.995347   10176 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-969800 san=[172.23.182.59 172.23.182.59 localhost 127.0.0.1 minikube functional-969800]
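The `provision.go:112` line above shows minikube signing a TLS server certificate whose SAN list covers the VM IP, `localhost`, `127.0.0.1`, `minikube`, and the profile name. A hedged sketch of the same step with plain `openssl` (a throwaway self-signed CA; every file name here is illustrative, only the SAN list mirrors the log):

```shell
# Illustrative stand-in for minikube's internal cert generator: sign a
# server cert against a scratch CA, with the SANs from the log line.
dir=$(mktemp -d)
cd "$dir"
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca-key.pem -out ca.pem \
  -subj "/O=jenkins.functional-969800" -days 1 >/dev/null 2>&1
openssl req -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
  -subj "/CN=minikube" >/dev/null 2>&1
printf 'subjectAltName=IP:172.23.182.59,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:functional-969800\n' > san.cnf
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out server.pem -days 1 -extfile san.cnf >/dev/null 2>&1
# Confirm the SANs actually landed in the signed cert.
openssl x509 -in server.pem -noout -text | grep -o '172.23.182.59' | head -n 1
```

The resulting `server.pem`/`server-key.pem` pair is what the later `scp ... --> /etc/docker/server.pem` lines copy into the guest for dockerd's `--tlsverify` mode.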
	I0122 10:58:08.282789   10176 provision.go:172] copyRemoteCerts
	I0122 10:58:08.298910   10176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0122 10:58:08.298910   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:10.409009   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:10.409144   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:10.409236   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:12.976882   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:12.976956   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:12.977542   10176 sshutil.go:53] new ssh client: &{IP:172.23.182.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-969800\id_rsa Username:docker}
	I0122 10:58:13.085280   10176 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7863488s)
	I0122 10:58:13.085358   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0122 10:58:13.085992   10176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0122 10:58:13.126129   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0122 10:58:13.126556   10176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0122 10:58:13.168097   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0122 10:58:13.168507   10176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0122 10:58:13.217793   10176 provision.go:86] duration metric: configureAuth took 14.6681291s
	I0122 10:58:13.217918   10176 buildroot.go:189] setting minikube options for container-runtime
	I0122 10:58:13.218335   10176 config.go:182] Loaded profile config "functional-969800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 10:58:13.218335   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:15.351479   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:15.351479   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:15.351579   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:17.930279   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:17.930493   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:17.935586   10176 main.go:141] libmachine: Using SSH client type: native
	I0122 10:58:17.936273   10176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 10:58:17.936273   10176 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0122 10:58:18.079216   10176 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0122 10:58:18.079216   10176 buildroot.go:70] root file system type: tmpfs
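The probe the provisioner just ran over SSH (`df --output=fstype / | tail -n 1`) is runnable anywhere; on the buildroot guest it printed `tmpfs`, telling minikube the rootfs is not persistent:

```shell
# Report the filesystem type of the root mount, exactly as buildroot.go
# does over SSH. On the test VM this was "tmpfs"; locally it will be
# whatever your root filesystem is (ext4, overlay, ...).
fstype=$(df --output=fstype / | tail -n 1)
echo "root fs: $fstype"
```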
	I0122 10:58:18.079787   10176 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0122 10:58:18.079878   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:20.240707   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:20.240707   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:20.240707   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:22.913834   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:22.913834   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:22.919127   10176 main.go:141] libmachine: Using SSH client type: native
	I0122 10:58:22.919857   10176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 10:58:22.919857   10176 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0122 10:58:23.085993   10176 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0122 10:58:23.085993   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:25.254494   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:25.254494   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:25.254617   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:27.803581   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:27.803653   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:27.809703   10176 main.go:141] libmachine: Using SSH client type: native
	I0122 10:58:27.810422   10176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 10:58:27.810422   10176 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0122 10:58:27.960981   10176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 10:58:27.960981   10176 machine.go:91] provisioned docker machine in 39.2434773s
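The one-liner above (`sudo diff -u ... || { sudo mv ...; daemon-reload; restart; }`) is an idempotent-update idiom: `diff` exits non-zero only when the rendered `docker.service.new` differs from the live unit, so the swap-and-restart branch runs only on change. A minimal reproduction with temp files standing in for `/lib/systemd/system`:

```shell
# Diff-and-swap: replace the unit (and, on the real host, daemon-reload +
# restart docker) only when the newly rendered file actually differs.
dir=$(mktemp -d)
printf 'ExecStart=/usr/bin/dockerd\n' > "$dir/docker.service"
printf 'ExecStart=/usr/bin/dockerd --tlsverify\n' > "$dir/docker.service.new"
if ! diff -u "$dir/docker.service" "$dir/docker.service.new" >/dev/null; then
  mv "$dir/docker.service.new" "$dir/docker.service"
  echo "unit changed: reload and restart"
fi
```

In the log the files matched nothing (first provision of this unit body), so the branch ran; on a re-provision with an unchanged unit, docker would not be restarted at all.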
	I0122 10:58:27.960981   10176 start.go:300] post-start starting for "functional-969800" (driver="hyperv")
	I0122 10:58:27.960981   10176 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0122 10:58:27.976090   10176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0122 10:58:27.976090   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:30.146064   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:30.146317   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:30.146317   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:32.720717   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:32.720900   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:32.720966   10176 sshutil.go:53] new ssh client: &{IP:172.23.182.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-969800\id_rsa Username:docker}
	I0122 10:58:32.831028   10176 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8548008s)
	I0122 10:58:32.844266   10176 ssh_runner.go:195] Run: cat /etc/os-release
	I0122 10:58:32.851138   10176 command_runner.go:130] > NAME=Buildroot
	I0122 10:58:32.851278   10176 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0122 10:58:32.851278   10176 command_runner.go:130] > ID=buildroot
	I0122 10:58:32.851490   10176 command_runner.go:130] > VERSION_ID=2021.02.12
	I0122 10:58:32.851490   10176 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0122 10:58:32.851544   10176 info.go:137] Remote host: Buildroot 2021.02.12
	I0122 10:58:32.851585   10176 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0122 10:58:32.852067   10176 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0122 10:58:32.853202   10176 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> 63962.pem in /etc/ssl/certs
	I0122 10:58:32.853202   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> /etc/ssl/certs/63962.pem
	I0122 10:58:32.854596   10176 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\6396\hosts -> hosts in /etc/test/nested/copy/6396
	I0122 10:58:32.854596   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\6396\hosts -> /etc/test/nested/copy/6396/hosts
	I0122 10:58:32.868168   10176 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/6396
	I0122 10:58:32.884833   10176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem --> /etc/ssl/certs/63962.pem (1708 bytes)
	I0122 10:58:32.932481   10176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\6396\hosts --> /etc/test/nested/copy/6396/hosts (40 bytes)
	I0122 10:58:32.976776   10176 start.go:303] post-start completed in 5.0157728s
	I0122 10:58:32.976776   10176 fix.go:56] fixHost completed within 46.9906788s
	I0122 10:58:32.976776   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:35.143871   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:35.143871   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:35.144017   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:37.703612   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:37.703920   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:37.710288   10176 main.go:141] libmachine: Using SSH client type: native
	I0122 10:58:37.710854   10176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 10:58:37.710854   10176 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0122 10:58:37.850913   10176 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705921117.851868669
	
	I0122 10:58:37.851011   10176 fix.go:206] guest clock: 1705921117.851868669
	I0122 10:58:37.851011   10176 fix.go:219] Guest: 2024-01-22 10:58:37.851868669 +0000 UTC Remote: 2024-01-22 10:58:32.9767767 +0000 UTC m=+52.675762201 (delta=4.875091969s)
	I0122 10:58:37.851175   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:40.010410   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:40.010601   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:40.010601   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:42.618291   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:42.618378   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:42.625417   10176 main.go:141] libmachine: Using SSH client type: native
	I0122 10:58:42.626844   10176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 10:58:42.626844   10176 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1705921117
	I0122 10:58:42.774588   10176 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan 22 10:58:37 UTC 2024
	
	I0122 10:58:42.774675   10176 fix.go:226] clock set: Mon Jan 22 10:58:37 UTC 2024
	 (err=<nil>)
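The clock fix above works by comparing the guest's epoch time (`date +%s.%N` over SSH) against the host-side timestamp recorded when the check started, then resetting the guest clock when the drift is significant. A sketch with the numbers from this log (the host value is rounded from the `delta=4.875091969s` line):

```shell
# Guest vs. host clock drift, as in fix.go:219/226. Values from the log;
# the ~4.87s delta here came from the slow Hyper-V PowerShell round-trips.
guest=1705921117   # guest clock, seconds (1705921117.851868669 truncated)
host=1705921112    # approximate host-side "Remote" timestamp
delta=$((guest - host))
echo "delta=${delta}s"
if [ "${delta#-}" -ge 2 ]; then
  echo "would run: sudo date -s @${guest}"
fi
```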
	I0122 10:58:42.774675   10176 start.go:83] releasing machines lock for "functional-969800", held for 56.7885347s
	I0122 10:58:42.774814   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:44.916786   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:44.916786   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:44.916786   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:47.456679   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:47.456679   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:47.461274   10176 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0122 10:58:47.461385   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:47.472541   10176 ssh_runner.go:195] Run: cat /version.json
	I0122 10:58:47.472541   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:49.704663   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:49.704663   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:49.704663   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:49.704663   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:49.704663   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:49.705114   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:52.392500   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:52.392560   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:52.392560   10176 sshutil.go:53] new ssh client: &{IP:172.23.182.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-969800\id_rsa Username:docker}
	I0122 10:58:52.411796   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:52.411796   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:52.412754   10176 sshutil.go:53] new ssh client: &{IP:172.23.182.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-969800\id_rsa Username:docker}
	I0122 10:58:52.487973   10176 command_runner.go:130] > {"iso_version": "v1.32.1-1703784139-17866", "kicbase_version": "v0.0.42-1703723663-17866", "minikube_version": "v1.32.0", "commit": "eb69424d8f623d7cabea57d4395ce87adf1b5fc3"}
	I0122 10:58:52.488809   10176 ssh_runner.go:235] Completed: cat /version.json: (5.0162464s)
	I0122 10:58:52.502299   10176 ssh_runner.go:195] Run: systemctl --version
	I0122 10:58:52.579182   10176 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0122 10:58:52.579508   10176 command_runner.go:130] > systemd 247 (247)
	I0122 10:58:52.579508   10176 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0122 10:58:52.579508   10176 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1182111s)
	I0122 10:58:52.592839   10176 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0122 10:58:52.600589   10176 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0122 10:58:52.601287   10176 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0122 10:58:52.614022   10176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0122 10:58:52.629161   10176 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0122 10:58:52.629512   10176 start.go:475] detecting cgroup driver to use...
	I0122 10:58:52.629804   10176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 10:58:52.662423   10176 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0122 10:58:52.677419   10176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0122 10:58:52.706434   10176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0122 10:58:52.723206   10176 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0122 10:58:52.736346   10176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0122 10:58:52.766629   10176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 10:58:52.795500   10176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0122 10:58:52.825136   10176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 10:58:52.853412   10176 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0122 10:58:52.881996   10176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0122 10:58:52.913498   10176 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0122 10:58:52.930750   10176 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0122 10:58:52.944733   10176 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0122 10:58:52.973118   10176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 10:58:53.199921   10176 ssh_runner.go:195] Run: sudo systemctl restart containerd
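The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place: pin the pause image to `registry.k8s.io/pause:3.9`, force `SystemdCgroup = false` (the "cgroupfs" driver the log mentions), and migrate legacy runtime names to `io.containerd.runc.v2`. The same edits applied to a scratch file:

```shell
# The log's sed edits against a stand-in config.toml (not the real
# /etc/containerd/config.toml), using the same regexes.
f=$(mktemp)
cat > "$f" <<'EOF'
  sandbox_image = "k8s.gcr.io/pause:3.6"
  SystemdCgroup = true
  runtime_type = "io.containerd.runtime.v1.linux"
EOF
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$f"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$f"
sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$f"
cat "$f"
```

Note the capture group `( *)` in the `-r` regexes: it preserves the original TOML indentation, so the edit is layout-neutral regardless of how deeply the key is nested.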
	I0122 10:58:53.229360   10176 start.go:475] detecting cgroup driver to use...
	I0122 10:58:53.244205   10176 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0122 10:58:53.265654   10176 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0122 10:58:53.265856   10176 command_runner.go:130] > [Unit]
	I0122 10:58:53.265856   10176 command_runner.go:130] > Description=Docker Application Container Engine
	I0122 10:58:53.265856   10176 command_runner.go:130] > Documentation=https://docs.docker.com
	I0122 10:58:53.265856   10176 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0122 10:58:53.265856   10176 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0122 10:58:53.265856   10176 command_runner.go:130] > StartLimitBurst=3
	I0122 10:58:53.265856   10176 command_runner.go:130] > StartLimitIntervalSec=60
	I0122 10:58:53.265856   10176 command_runner.go:130] > [Service]
	I0122 10:58:53.265856   10176 command_runner.go:130] > Type=notify
	I0122 10:58:53.265856   10176 command_runner.go:130] > Restart=on-failure
	I0122 10:58:53.265856   10176 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0122 10:58:53.265856   10176 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0122 10:58:53.265856   10176 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0122 10:58:53.265856   10176 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0122 10:58:53.265856   10176 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0122 10:58:53.265856   10176 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0122 10:58:53.265856   10176 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0122 10:58:53.265856   10176 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0122 10:58:53.265856   10176 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0122 10:58:53.265856   10176 command_runner.go:130] > ExecStart=
	I0122 10:58:53.265856   10176 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0122 10:58:53.265856   10176 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0122 10:58:53.265856   10176 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0122 10:58:53.265856   10176 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0122 10:58:53.265856   10176 command_runner.go:130] > LimitNOFILE=infinity
	I0122 10:58:53.265856   10176 command_runner.go:130] > LimitNPROC=infinity
	I0122 10:58:53.265856   10176 command_runner.go:130] > LimitCORE=infinity
	I0122 10:58:53.265856   10176 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0122 10:58:53.265856   10176 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0122 10:58:53.265856   10176 command_runner.go:130] > TasksMax=infinity
	I0122 10:58:53.265856   10176 command_runner.go:130] > TimeoutStartSec=0
	I0122 10:58:53.265856   10176 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0122 10:58:53.265856   10176 command_runner.go:130] > Delegate=yes
	I0122 10:58:53.265856   10176 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0122 10:58:53.265856   10176 command_runner.go:130] > KillMode=process
	I0122 10:58:53.266496   10176 command_runner.go:130] > [Install]
	I0122 10:58:53.266496   10176 command_runner.go:130] > WantedBy=multi-user.target
	I0122 10:58:53.282083   10176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 10:58:53.311447   10176 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0122 10:58:53.359743   10176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 10:58:53.396608   10176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0122 10:58:53.415523   10176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 10:58:53.444264   10176 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0122 10:58:53.456104   10176 ssh_runner.go:195] Run: which cri-dockerd
	I0122 10:58:53.461931   10176 command_runner.go:130] > /usr/bin/cri-dockerd
	I0122 10:58:53.476026   10176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0122 10:58:53.490730   10176 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0122 10:58:53.529862   10176 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0122 10:58:53.744013   10176 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0122 10:58:53.975729   10176 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0122 10:58:53.975983   10176 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0122 10:58:54.028787   10176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 10:58:54.249447   10176 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0122 11:00:05.628463   10176 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0122 11:00:05.628515   10176 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xe" for details.
	I0122 11:00:05.630958   10176 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.3811974s)
	I0122 11:00:05.644549   10176 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0122 11:00:05.669673   10176 command_runner.go:130] > -- Journal begins at Mon 2024-01-22 10:54:50 UTC, ends at Mon 2024-01-22 11:00:05 UTC. --
	I0122 11:00:05.670340   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	I0122 11:00:05.670340   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.640172956Z" level=info msg="Starting up"
	I0122 11:00:05.670340   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.641196772Z" level=info msg="containerd not running, starting managed containerd"
	I0122 11:00:05.670340   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.643170703Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=696
	I0122 11:00:05.670340   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.681307999Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	I0122 11:00:05.670433   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.711852976Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0122 11:00:05.670433   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.711950178Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.670523   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.714822923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.670535   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.714949725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.670535   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715319930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.670609   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715475833Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0122 11:00:05.670609   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715577834Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.670609   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715725437Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.670681   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715815938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.670681   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715939840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.670681   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716415947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.670751   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716515049Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0122 11:00:05.670751   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716533749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.670820   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716692852Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.670889   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716783253Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0122 11:00:05.670889   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716856454Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0122 11:00:05.670889   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716943756Z" level=info msg="metadata content store policy set" policy=shared
	I0122 11:00:05.670889   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731298280Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0122 11:00:05.670974   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731388081Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0122 11:00:05.670974   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731408582Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0122 11:00:05.670974   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731467083Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0122 11:00:05.671041   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731484883Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0122 11:00:05.671041   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731498383Z" level=info msg="NRI interface is disabled by configuration."
	I0122 11:00:05.671109   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731528784Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0122 11:00:05.671109   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731677686Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0122 11:00:05.671109   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731783888Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0122 11:00:05.671109   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731806788Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0122 11:00:05.671176   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731822488Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0122 11:00:05.671176   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731838588Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671176   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731859089Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671253   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731890989Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671301   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731924790Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671301   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731943290Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671301   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731959290Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671301   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731973491Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671392   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731992491Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0122 11:00:05.671392   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732288795Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0122 11:00:05.671392   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732814904Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671392   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732879305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671483   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732900805Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0122 11:00:05.671483   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732940506Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0122 11:00:05.671483   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733012907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671483   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733046407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671557   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733235810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671557   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733295711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671557   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733311411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671557   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733344312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671627   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733358612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671627   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733373012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671627   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733387713Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0122 11:00:05.671699   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733473314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671699   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733594716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671699   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733615016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671770   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733643517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671770   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733664217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671770   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733681917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671770   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733711618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671842   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733724518Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0122 11:00:05.671842   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733749418Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0122 11:00:05.671918   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733762418Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0122 11:00:05.671918   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733774019Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0122 11:00:05.671918   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734197325Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0122 11:00:05.672016   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734365128Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0122 11:00:05.672016   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734471630Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0122 11:00:05.672016   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734496030Z" level=info msg="containerd successfully booted in 0.054269s"
	I0122 11:00:05.672016   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.768676964Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0122 11:00:05.672016   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.783849401Z" level=info msg="Loading containers: start."
	I0122 11:00:05.672104   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.997712742Z" level=info msg="Loading containers: done."
	I0122 11:00:05.672104   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022214804Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	I0122 11:00:05.672104   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022261705Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	I0122 11:00:05.672104   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022274505Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	I0122 11:00:05.672104   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022281005Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	I0122 11:00:05.672104   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022301806Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	I0122 11:00:05.672183   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022416907Z" level=info msg="Daemon has completed initialization"
	I0122 11:00:05.672183   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.083652704Z" level=info msg="API listen on /var/run/docker.sock"
	I0122 11:00:05.672183   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.083926108Z" level=info msg="API listen on [::]:2376"
	I0122 11:00:05.672183   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 systemd[1]: Started Docker Application Container Engine.
	I0122 11:00:05.672248   10176 command_runner.go:130] > Jan 22 10:56:12 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	I0122 11:00:05.672248   10176 command_runner.go:130] > Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.621957907Z" level=info msg="Processing signal 'terminated'"
	I0122 11:00:05.672248   10176 command_runner.go:130] > Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623204307Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0122 11:00:05.672329   10176 command_runner.go:130] > Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623373807Z" level=info msg="Daemon shutdown complete"
	I0122 11:00:05.672329   10176 command_runner.go:130] > Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623417807Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0122 11:00:05.672385   10176 command_runner.go:130] > Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623429307Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0122 11:00:05.672460   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 systemd[1]: docker.service: Succeeded.
	I0122 11:00:05.672518   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	I0122 11:00:05.672518   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	I0122 11:00:05.672518   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.697995107Z" level=info msg="Starting up"
	I0122 11:00:05.672518   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.699240007Z" level=info msg="containerd not running, starting managed containerd"
	I0122 11:00:05.672574   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.700630907Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1031
	I0122 11:00:05.672596   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.732892407Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	I0122 11:00:05.672596   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.759794007Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0122 11:00:05.672649   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.759947407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.672649   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.762945807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.672725   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763156107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.672833   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763474707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.672888   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763800907Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0122 11:00:05.672929   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764170707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764208607Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764225307Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764251107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764439107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764536207Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764552707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764742807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764836407Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764867007Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764897007Z" level=info msg="metadata content store policy set" policy=shared
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765451307Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765545607Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765570407Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765599407Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765631407Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765670507Z" level=info msg="NRI interface is disabled by configuration."
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765715707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765770507Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765918107Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765954407Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765992607Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766011607Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766135807Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766161807Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766178607Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766211907Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766239507Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766258407Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766273107Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766317107Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767449407Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767559907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767585207Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767614707Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767679407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767698307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767713707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767728107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.674886   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767743307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.674886   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767759107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.674886   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767773607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.674886   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767790107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.674886   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767807907Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0122 11:00:05.675068   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767846207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.675068   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767884107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.675068   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767901707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.675068   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767917307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.675164   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767934707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.675164   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767953007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.675164   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767985707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.675164   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768000707Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0122 11:00:05.675164   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768018307Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0122 11:00:05.675282   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768110907Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0122 11:00:05.675282   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768150607Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0122 11:00:05.675282   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770293407Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0122 11:00:05.675396   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770589207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0122 11:00:05.675396   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770747207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0122 11:00:05.675396   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770779907Z" level=info msg="containerd successfully booted in 0.039287s"
	I0122 11:00:05.675396   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.796437607Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0122 11:00:05.675484   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.812301607Z" level=info msg="Loading containers: start."
	I0122 11:00:05.675484   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.959771407Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0122 11:00:05.675484   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.025356107Z" level=info msg="Loading containers: done."
	I0122 11:00:05.675484   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043938307Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	I0122 11:00:05.675563   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043967507Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	I0122 11:00:05.675563   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043976207Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	I0122 11:00:05.675563   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043982907Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	I0122 11:00:05.675679   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.044002007Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	I0122 11:00:05.675679   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.044081507Z" level=info msg="Daemon has completed initialization"
	I0122 11:00:05.675679   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.084793407Z" level=info msg="API listen on /var/run/docker.sock"
	I0122 11:00:05.675751   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 systemd[1]: Started Docker Application Container Engine.
	I0122 11:00:05.675751   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.087289507Z" level=info msg="API listen on [::]:2376"
	I0122 11:00:05.675751   10176 command_runner.go:130] > Jan 22 10:56:24 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	I0122 11:00:05.675751   10176 command_runner.go:130] > Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.888530207Z" level=info msg="Processing signal 'terminated'"
	I0122 11:00:05.675824   10176 command_runner.go:130] > Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890215407Z" level=info msg="Daemon shutdown complete"
	I0122 11:00:05.675824   10176 command_runner.go:130] > Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890260307Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0122 11:00:05.675992   10176 command_runner.go:130] > Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890587307Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	I0122 11:00:05.676061   10176 command_runner.go:130] > Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890740407Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0122 11:00:05.676061   10176 command_runner.go:130] > Jan 22 10:56:25 functional-969800 systemd[1]: docker.service: Succeeded.
	I0122 11:00:05.676135   10176 command_runner.go:130] > Jan 22 10:56:25 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	I0122 11:00:05.676135   10176 command_runner.go:130] > Jan 22 10:56:25 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	I0122 11:00:05.676135   10176 command_runner.go:130] > Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.965408907Z" level=info msg="Starting up"
	I0122 11:00:05.676135   10176 command_runner.go:130] > Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.966518207Z" level=info msg="containerd not running, starting managed containerd"
	I0122 11:00:05.676135   10176 command_runner.go:130] > Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.967588307Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1337
	I0122 11:00:05.676135   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.001795407Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	I0122 11:00:05.676234   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.035509507Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0122 11:00:05.676234   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.035631707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.676308   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038165807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.676308   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038334707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.676345   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038628507Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.676345   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038744407Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0122 11:00:05.676418   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038816807Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.676418   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038858807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.676418   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038871007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.676492   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038894707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.676492   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039248607Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.676492   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039359407Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0122 11:00:05.676564   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039378107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.676594   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039547707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.676653   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039641707Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039668207Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039685807Z" level=info msg="metadata content store policy set" policy=shared
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039841507Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039933007Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040079807Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040230207Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040269807Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040281707Z" level=info msg="NRI interface is disabled by configuration."
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040294707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040361507Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040377307Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040391107Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040409607Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040429307Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040449707Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040463607Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040482307Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040514007Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040527507Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040540307Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040551607Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040608107Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042558207Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042674107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677265   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042695907Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0122 11:00:05.677265   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042734707Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0122 11:00:05.677265   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042787207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677265   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042916307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677265   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042937207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677265   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042958307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677265   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042974407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677410   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042993607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043009507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043463207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043558107Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043662307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043706107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043727707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043761007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043778507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043801907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043893407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043920207Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043942907Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043958907Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043974107Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044581807Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044705407Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044803207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044899507Z" level=info msg="containerd successfully booted in 0.045219s"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1331]: time="2024-01-22T10:56:26.371254407Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.294692607Z" level=info msg="Loading containers: start."
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.459462607Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.525291207Z" level=info msg="Loading containers: done."
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544516307Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544552907Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544562107Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544572107Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544597307Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544708807Z" level=info msg="Daemon has completed initialization"
	I0122 11:00:05.678028   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.576971407Z" level=info msg="API listen on [::]:2376"
	I0122 11:00:05.678028   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.577092907Z" level=info msg="API listen on /var/run/docker.sock"
	I0122 11:00:05.678028   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 systemd[1]: Started Docker Application Container Engine.
	I0122 11:00:05.678028   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643012417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678028   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643161510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643191508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643608487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648436541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648545436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648613232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648628031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650622030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650800621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650820520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650831019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.675561460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.675860444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.676066234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.676182428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627214283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627479370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627708859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627925149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.661590442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.665215768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.665413259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.667762047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.837739331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838092414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678708   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838119012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678708   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838132512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678708   10176 command_runner.go:130] > Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029726252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678708   10176 command_runner.go:130] > Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029885345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678708   10176 command_runner.go:130] > Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029907544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678852   10176 command_runner.go:130] > Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029921943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678884   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442471250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678884   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442551949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678933   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442585148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442603448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.636810797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637124693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637309591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637408590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.128989239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129139537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129211936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129226336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.005359252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.018527500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.020526276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.020768074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531424444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531481843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531496243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531507943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.163527770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.164982859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.165023859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.165340756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.271367222Z" level=info msg="Processing signal 'terminated'"
	I0122 11:00:05.679489   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.408491526Z" level=info msg="ignoring event" container=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.679489   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.408935427Z" level=info msg="shim disconnected" id=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c namespace=moby
	I0122 11:00:05.679489   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.409367629Z" level=warning msg="cleaning up after shim disconnected" id=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c namespace=moby
	I0122 11:00:05.679489   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.409386829Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482131396Z" level=info msg="shim disconnected" id=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482242197Z" level=warning msg="cleaning up after shim disconnected" id=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482643498Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.485265908Z" level=info msg="ignoring event" container=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.514982317Z" level=info msg="ignoring event" container=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.517274025Z" level=info msg="shim disconnected" id=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.518024528Z" level=warning msg="cleaning up after shim disconnected" id=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.518088128Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.533465185Z" level=info msg="ignoring event" container=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533775786Z" level=info msg="shim disconnected" id=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533878986Z" level=warning msg="cleaning up after shim disconnected" id=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533896387Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.541816616Z" level=info msg="ignoring event" container=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.544778327Z" level=info msg="shim disconnected" id=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.545052128Z" level=warning msg="cleaning up after shim disconnected" id=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.545071928Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.564756500Z" level=info msg="ignoring event" container=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.566095405Z" level=info msg="shim disconnected" id=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.580413557Z" level=warning msg="cleaning up after shim disconnected" id=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.580441258Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680179   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590163393Z" level=info msg="shim disconnected" id=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f namespace=moby
	I0122 11:00:05.680179   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590197393Z" level=warning msg="cleaning up after shim disconnected" id=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f namespace=moby
	I0122 11:00:05.680179   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590265894Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.600700032Z" level=info msg="shim disconnected" id=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.600935033Z" level=info msg="ignoring event" container=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.600978833Z" level=info msg="ignoring event" container=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.611137470Z" level=info msg="ignoring event" container=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611370671Z" level=warning msg="cleaning up after shim disconnected" id=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611429671Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611635772Z" level=info msg="shim disconnected" id=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.612703976Z" level=warning msg="cleaning up after shim disconnected" id=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.613303678Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.613702780Z" level=info msg="ignoring event" container=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.616524590Z" level=info msg="ignoring event" container=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.618830499Z" level=info msg="shim disconnected" id=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.618976199Z" level=warning msg="cleaning up after shim disconnected" id=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.627451230Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.628495534Z" level=info msg="shim disconnected" id=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.630007140Z" level=warning msg="cleaning up after shim disconnected" id=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.630196840Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.648394307Z" level=info msg="shim disconnected" id=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.650553715Z" level=info msg="ignoring event" container=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.658301844Z" level=warning msg="cleaning up after shim disconnected" id=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.658345944Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:59 functional-969800 dockerd[1331]: time="2024-01-22T10:58:59.444520533Z" level=info msg="ignoring event" container=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.447631244Z" level=info msg="shim disconnected" id=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.448443247Z" level=warning msg="cleaning up after shim disconnected" id=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.448523248Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.436253177Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.489047672Z" level=info msg="ignoring event" container=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.490548477Z" level=info msg="shim disconnected" id=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.490993079Z" level=warning msg="cleaning up after shim disconnected" id=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.491010379Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.536562746Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.536997048Z" level=info msg="Daemon shutdown complete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.537118848Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.537155648Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:05 functional-969800 systemd[1]: docker.service: Succeeded.
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:05 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:05 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:05 functional-969800 dockerd[5681]: time="2024-01-22T10:59:05.618184021Z" level=info msg="Starting up"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 11:00:05 functional-969800 dockerd[5681]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 11:00:05 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 11:00:05 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 11:00:05 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	I0122 11:00:05.703571   10176 out.go:177] 
	W0122 11:00:05.704224   10176 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Mon 2024-01-22 10:54:50 UTC, ends at Mon 2024-01-22 11:00:05 UTC. --
	Jan 22 10:55:41 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.640172956Z" level=info msg="Starting up"
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.641196772Z" level=info msg="containerd not running, starting managed containerd"
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.643170703Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=696
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.681307999Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.711852976Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.711950178Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.714822923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.714949725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715319930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715475833Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715577834Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715725437Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715815938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715939840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716415947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716515049Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716533749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716692852Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716783253Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716856454Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716943756Z" level=info msg="metadata content store policy set" policy=shared
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731298280Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731388081Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731408582Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731467083Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731484883Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731498383Z" level=info msg="NRI interface is disabled by configuration."
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731528784Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731677686Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731783888Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731806788Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731822488Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731838588Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731859089Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731890989Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731924790Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731943290Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731959290Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731973491Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731992491Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732288795Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732814904Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732879305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732900805Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732940506Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733012907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733046407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733235810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733295711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733311411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733344312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733358612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733373012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733387713Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733473314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733594716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733615016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733643517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733664217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733681917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733711618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733724518Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733749418Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733762418Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733774019Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734197325Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734365128Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734471630Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734496030Z" level=info msg="containerd successfully booted in 0.054269s"
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.768676964Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.783849401Z" level=info msg="Loading containers: start."
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.997712742Z" level=info msg="Loading containers: done."
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022214804Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022261705Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022274505Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022281005Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022301806Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022416907Z" level=info msg="Daemon has completed initialization"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.083652704Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.083926108Z" level=info msg="API listen on [::]:2376"
	Jan 22 10:55:42 functional-969800 systemd[1]: Started Docker Application Container Engine.
	Jan 22 10:56:12 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.621957907Z" level=info msg="Processing signal 'terminated'"
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623204307Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623373807Z" level=info msg="Daemon shutdown complete"
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623417807Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623429307Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 22 10:56:13 functional-969800 systemd[1]: docker.service: Succeeded.
	Jan 22 10:56:13 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 10:56:13 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.697995107Z" level=info msg="Starting up"
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.699240007Z" level=info msg="containerd not running, starting managed containerd"
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.700630907Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1031
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.732892407Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.759794007Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.759947407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.762945807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763156107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763474707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763800907Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764170707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764208607Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764225307Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764251107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764439107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764536207Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764552707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764742807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764836407Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764867007Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764897007Z" level=info msg="metadata content store policy set" policy=shared
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765451307Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765545607Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765570407Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765599407Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765631407Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765670507Z" level=info msg="NRI interface is disabled by configuration."
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765715707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765770507Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765918107Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765954407Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765992607Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766011607Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766135807Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766161807Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766178607Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766211907Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766239507Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766258407Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766273107Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766317107Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767449407Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767559907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767585207Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767614707Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767679407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767698307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767713707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767728107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767743307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767759107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767773607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767790107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767807907Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767846207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767884107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767901707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767917307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767934707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767953007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767985707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768000707Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768018307Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768110907Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768150607Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770293407Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770589207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770747207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770779907Z" level=info msg="containerd successfully booted in 0.039287s"
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.796437607Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.812301607Z" level=info msg="Loading containers: start."
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.959771407Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.025356107Z" level=info msg="Loading containers: done."
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043938307Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043967507Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043976207Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043982907Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.044002007Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.044081507Z" level=info msg="Daemon has completed initialization"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.084793407Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 22 10:56:14 functional-969800 systemd[1]: Started Docker Application Container Engine.
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.087289507Z" level=info msg="API listen on [::]:2376"
	Jan 22 10:56:24 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.888530207Z" level=info msg="Processing signal 'terminated'"
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890215407Z" level=info msg="Daemon shutdown complete"
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890260307Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890587307Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890740407Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 22 10:56:25 functional-969800 systemd[1]: docker.service: Succeeded.
	Jan 22 10:56:25 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 10:56:25 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.965408907Z" level=info msg="Starting up"
	Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.966518207Z" level=info msg="containerd not running, starting managed containerd"
	Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.967588307Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1337
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.001795407Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.035509507Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.035631707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038165807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038334707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038628507Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038744407Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038816807Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038858807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038871007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038894707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039248607Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039359407Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039378107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039547707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039641707Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039668207Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039685807Z" level=info msg="metadata content store policy set" policy=shared
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039841507Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039933007Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040079807Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040230207Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040269807Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040281707Z" level=info msg="NRI interface is disabled by configuration."
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040294707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040361507Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040377307Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040391107Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040409607Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040429307Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040449707Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040463607Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040482307Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040514007Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040527507Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040540307Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040551607Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040608107Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042558207Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042674107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042695907Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042734707Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042787207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042916307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042937207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042958307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042974407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042993607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043009507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043463207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043558107Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043662307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043706107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043727707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043761007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043778507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043801907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043893407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043920207Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043942907Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043958907Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043974107Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044581807Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044705407Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044803207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044899507Z" level=info msg="containerd successfully booted in 0.045219s"
	Jan 22 10:56:26 functional-969800 dockerd[1331]: time="2024-01-22T10:56:26.371254407Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.294692607Z" level=info msg="Loading containers: start."
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.459462607Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.525291207Z" level=info msg="Loading containers: done."
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544516307Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544552907Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544562107Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544572107Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544597307Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544708807Z" level=info msg="Daemon has completed initialization"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.576971407Z" level=info msg="API listen on [::]:2376"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.577092907Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 22 10:56:30 functional-969800 systemd[1]: Started Docker Application Container Engine.
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643012417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643161510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643191508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643608487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648436541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648545436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648613232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648628031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650622030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650800621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650820520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650831019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.675561460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.675860444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.676066234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.676182428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627214283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627479370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627708859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627925149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.661590442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.665215768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.665413259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.667762047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.837739331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838092414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838119012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838132512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029726252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029885345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029907544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029921943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442471250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442551949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442585148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442603448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.636810797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637124693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637309591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637408590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.128989239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129139537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129211936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129226336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.005359252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.018527500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.020526276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.020768074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531424444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531481843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531496243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531507943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.163527770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.164982859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.165023859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.165340756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:58:54 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.271367222Z" level=info msg="Processing signal 'terminated'"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.408491526Z" level=info msg="ignoring event" container=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.408935427Z" level=info msg="shim disconnected" id=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.409367629Z" level=warning msg="cleaning up after shim disconnected" id=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.409386829Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482131396Z" level=info msg="shim disconnected" id=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482242197Z" level=warning msg="cleaning up after shim disconnected" id=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482643498Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.485265908Z" level=info msg="ignoring event" container=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.514982317Z" level=info msg="ignoring event" container=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.517274025Z" level=info msg="shim disconnected" id=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.518024528Z" level=warning msg="cleaning up after shim disconnected" id=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.518088128Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.533465185Z" level=info msg="ignoring event" container=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533775786Z" level=info msg="shim disconnected" id=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533878986Z" level=warning msg="cleaning up after shim disconnected" id=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533896387Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.541816616Z" level=info msg="ignoring event" container=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.544778327Z" level=info msg="shim disconnected" id=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.545052128Z" level=warning msg="cleaning up after shim disconnected" id=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.545071928Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.564756500Z" level=info msg="ignoring event" container=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.566095405Z" level=info msg="shim disconnected" id=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.580413557Z" level=warning msg="cleaning up after shim disconnected" id=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.580441258Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590163393Z" level=info msg="shim disconnected" id=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590197393Z" level=warning msg="cleaning up after shim disconnected" id=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590265894Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.600700032Z" level=info msg="shim disconnected" id=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.600935033Z" level=info msg="ignoring event" container=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.600978833Z" level=info msg="ignoring event" container=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.611137470Z" level=info msg="ignoring event" container=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611370671Z" level=warning msg="cleaning up after shim disconnected" id=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611429671Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611635772Z" level=info msg="shim disconnected" id=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.612703976Z" level=warning msg="cleaning up after shim disconnected" id=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.613303678Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.613702780Z" level=info msg="ignoring event" container=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.616524590Z" level=info msg="ignoring event" container=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.618830499Z" level=info msg="shim disconnected" id=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.618976199Z" level=warning msg="cleaning up after shim disconnected" id=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.627451230Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.628495534Z" level=info msg="shim disconnected" id=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.630007140Z" level=warning msg="cleaning up after shim disconnected" id=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.630196840Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.648394307Z" level=info msg="shim disconnected" id=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.650553715Z" level=info msg="ignoring event" container=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.658301844Z" level=warning msg="cleaning up after shim disconnected" id=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.658345944Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:59 functional-969800 dockerd[1331]: time="2024-01-22T10:58:59.444520533Z" level=info msg="ignoring event" container=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.447631244Z" level=info msg="shim disconnected" id=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 namespace=moby
	Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.448443247Z" level=warning msg="cleaning up after shim disconnected" id=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 namespace=moby
	Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.448523248Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.436253177Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.489047672Z" level=info msg="ignoring event" container=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.490548477Z" level=info msg="shim disconnected" id=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.490993079Z" level=warning msg="cleaning up after shim disconnected" id=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.491010379Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.536562746Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.536997048Z" level=info msg="Daemon shutdown complete"
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.537118848Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.537155648Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 22 10:59:05 functional-969800 systemd[1]: docker.service: Succeeded.
	Jan 22 10:59:05 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 10:59:05 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 10:59:05 functional-969800 dockerd[5681]: time="2024-01-22T10:59:05.618184021Z" level=info msg="Starting up"
	Jan 22 11:00:05 functional-969800 dockerd[5681]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:00:05 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:00:05 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:00:05 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0122 11:00:05.705403   10176 out.go:239] * 
	W0122 11:00:05.706768   10176 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0122 11:00:05.707971   10176 out.go:177] 
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-01-22 10:54:50 UTC, ends at Mon 2024-01-22 11:02:06 UTC. --
	Jan 22 10:59:05 functional-969800 dockerd[5681]: time="2024-01-22T10:59:05.618184021Z" level=info msg="Starting up"
	Jan 22 11:00:05 functional-969800 dockerd[5681]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:00:05 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:00:05 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:00:05 functional-969800 cri-dockerd[1222]: time="2024-01-22T11:00:05Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Jan 22 11:00:05 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:00:05 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Jan 22 11:00:05 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:00:05 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:00:05 functional-969800 dockerd[5820]: time="2024-01-22T11:00:05.817795905Z" level=info msg="Starting up"
	Jan 22 11:01:05 functional-969800 dockerd[5820]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:01:05 functional-969800 cri-dockerd[1222]: time="2024-01-22T11:01:05Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Jan 22 11:01:05 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:01:05 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:01:05 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:01:05 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Jan 22 11:01:05 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:01:05 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:01:06 functional-969800 dockerd[6034]: time="2024-01-22T11:01:06.042578101Z" level=info msg="Starting up"
	Jan 22 11:02:06 functional-969800 dockerd[6034]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:02:06 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:02:06 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:02:06 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:02:06 functional-969800 cri-dockerd[1222]: time="2024-01-22T11:02:06Z" level=error msg="Unable to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:02:06 functional-969800 cri-dockerd[1222]: time="2024-01-22T11:02:06Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	
	
	==> container status <==
	
	==> describe nodes <==
	
	==> dmesg <==
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +7.735107] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jan22 10:55] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.139168] systemd-fstab-generator[682]: Ignoring "noauto" for root device
	[Jan22 10:56] systemd-fstab-generator[954]: Ignoring "noauto" for root device
	[  +0.548146] systemd-fstab-generator[992]: Ignoring "noauto" for root device
	[  +0.177021] systemd-fstab-generator[1003]: Ignoring "noauto" for root device
	[  +0.192499] systemd-fstab-generator[1016]: Ignoring "noauto" for root device
	[  +1.326236] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.405236] systemd-fstab-generator[1177]: Ignoring "noauto" for root device
	[  +0.162074] systemd-fstab-generator[1188]: Ignoring "noauto" for root device
	[  +0.187859] systemd-fstab-generator[1199]: Ignoring "noauto" for root device
	[  +0.249478] systemd-fstab-generator[1214]: Ignoring "noauto" for root device
	[  +9.926271] systemd-fstab-generator[1322]: Ignoring "noauto" for root device
	[  +5.555073] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.934437] systemd-fstab-generator[1705]: Ignoring "noauto" for root device
	[  +0.641554] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.684973] systemd-fstab-generator[2663]: Ignoring "noauto" for root device
	[Jan22 10:57] kauditd_printk_skb: 21 callbacks suppressed
	[Jan22 10:58] systemd-fstab-generator[5187]: Ignoring "noauto" for root device
	[  +0.564007] systemd-fstab-generator[5225]: Ignoring "noauto" for root device
	[  +0.215913] systemd-fstab-generator[5236]: Ignoring "noauto" for root device
	[  +0.280562] systemd-fstab-generator[5249]: Ignoring "noauto" for root device
	
	
	==> kernel <==
	 11:03:06 up 8 min,  0 users,  load average: 0.03, 0.27, 0.19
	Linux functional-969800 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-22 10:54:50 UTC, ends at Mon 2024-01-22 11:03:06 UTC. --
	Jan 22 11:02:59 functional-969800 kubelet[2697]: E0122 11:02:59.837937    2697 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-969800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-969800?resourceVersion=0&timeout=10s\": dial tcp 172.23.182.59:8441: connect: connection refused"
	Jan 22 11:02:59 functional-969800 kubelet[2697]: E0122 11:02:59.838418    2697 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-969800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-969800?timeout=10s\": dial tcp 172.23.182.59:8441: connect: connection refused"
	Jan 22 11:02:59 functional-969800 kubelet[2697]: E0122 11:02:59.838728    2697 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-969800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-969800?timeout=10s\": dial tcp 172.23.182.59:8441: connect: connection refused"
	Jan 22 11:02:59 functional-969800 kubelet[2697]: E0122 11:02:59.839536    2697 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-969800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-969800?timeout=10s\": dial tcp 172.23.182.59:8441: connect: connection refused"
	Jan 22 11:02:59 functional-969800 kubelet[2697]: E0122 11:02:59.840002    2697 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-969800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-969800?timeout=10s\": dial tcp 172.23.182.59:8441: connect: connection refused"
	Jan 22 11:02:59 functional-969800 kubelet[2697]: E0122 11:02:59.840091    2697 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
	Jan 22 11:03:01 functional-969800 kubelet[2697]: E0122 11:03:01.142395    2697 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-969800?timeout=10s\": dial tcp 172.23.182.59:8441: connect: connection refused" interval="7s"
	Jan 22 11:03:03 functional-969800 kubelet[2697]: E0122 11:03:03.580102    2697 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 4m9.66336523s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Jan 22 11:03:04 functional-969800 kubelet[2697]: E0122 11:03:04.726435    2697 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-functional-969800.17aca63cd10f7bc8", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-functional-969800", UID:"eec8edf9285f620f8bcb544dccfdc7a0", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Liveness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"functional-969800"}, FirstTimestamp:time.Date(2024, time.January, 22, 10, 58, 59, 2276808, time.Local), LastTimestamp:time.Date(2024, time.January, 22, 10, 58, 59, 2276808, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"functional-969800"}': 'Post "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events": dial tcp 172.23.182.59:8441: connect: connection refused'(may retry after sleeping)
	Jan 22 11:03:06 functional-969800 kubelet[2697]: E0122 11:03:06.266025    2697 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jan 22 11:03:06 functional-969800 kubelet[2697]: E0122 11:03:06.269688    2697 container_log_manager.go:185] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:03:06 functional-969800 kubelet[2697]: E0122 11:03:06.269078    2697 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jan 22 11:03:06 functional-969800 kubelet[2697]: E0122 11:03:06.269852    2697 kuberuntime_image.go:103] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:03:06 functional-969800 kubelet[2697]: I0122 11:03:06.269937    2697 image_gc_manager.go:210] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:03:06 functional-969800 kubelet[2697]: E0122 11:03:06.269153    2697 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jan 22 11:03:06 functional-969800 kubelet[2697]: E0122 11:03:06.270629    2697 kuberuntime_container.go:477] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:03:06 functional-969800 kubelet[2697]: E0122 11:03:06.269379    2697 kubelet.go:2865] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:03:06 functional-969800 kubelet[2697]: E0122 11:03:06.271130    2697 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:03:06 functional-969800 kubelet[2697]: E0122 11:03:06.271162    2697 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jan 22 11:03:06 functional-969800 kubelet[2697]: E0122 11:03:06.272408    2697 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:03:06 functional-969800 kubelet[2697]: E0122 11:03:06.272507    2697 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:03:06 functional-969800 kubelet[2697]: E0122 11:03:06.272045    2697 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jan 22 11:03:06 functional-969800 kubelet[2697]: E0122 11:03:06.272629    2697 kuberuntime_container.go:477] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jan 22 11:03:06 functional-969800 kubelet[2697]: E0122 11:03:06.272871    2697 kubelet.go:1402] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Jan 22 11:03:06 functional-969800 kubelet[2697]: E0122 11:03:06.272125    2697 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0122 11:00:18.493067   13744 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0122 11:01:05.826917   13744 logs.go:281] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:01:05.859072   13744 logs.go:281] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:01:05.891987   13744 logs.go:281] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:01:05.925972   13744 logs.go:281] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:01:05.959231   13744 logs.go:281] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:02:06.052016   13744 logs.go:281] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:02:06.089064   13744 logs.go:281] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:02:06.128175   13744 logs.go:281] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:03:06.263446   13744 logs.go:195] command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-01-22T11:02:08Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	 output: "\n** stderr ** \ntime=\"2024-01-22T11:02:08Z\" level=fatal msg=\"validate service connection: validate CRI v1 runtime API for endpoint \\\"unix:///var/run/cri-dockerd.sock\\\": rpc error: code = DeadlineExceeded desc = context deadline exceeded\"\nCannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?\n\n** /stderr **"
	E0122 11:03:06.366185   13744 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8441 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: container status, describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-969800 -n functional-969800
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-969800 -n functional-969800: exit status 2 (12.1728861s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0122 11:03:07.096132    2176 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-969800" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (338.86s)

TestFunctional/serial/KubectlGetPods (120.24s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-969800 get po -A
E0122 11:03:23.954313    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-969800 get po -A: exit status 1 (10.3815032s)

** stderr ** 
	E0122 11:03:21.465199    6580 memcache.go:265] couldn't get current server API group list: Get "https://172.23.182.59:8441/api?timeout=32s": dial tcp 172.23.182.59:8441: connectex: No connection could be made because the target machine actively refused it.
	E0122 11:03:23.562692    6580 memcache.go:265] couldn't get current server API group list: Get "https://172.23.182.59:8441/api?timeout=32s": dial tcp 172.23.182.59:8441: connectex: No connection could be made because the target machine actively refused it.
	E0122 11:03:25.584844    6580 memcache.go:265] couldn't get current server API group list: Get "https://172.23.182.59:8441/api?timeout=32s": dial tcp 172.23.182.59:8441: connectex: No connection could be made because the target machine actively refused it.
	E0122 11:03:27.619597    6580 memcache.go:265] couldn't get current server API group list: Get "https://172.23.182.59:8441/api?timeout=32s": dial tcp 172.23.182.59:8441: connectex: No connection could be made because the target machine actively refused it.
	E0122 11:03:29.677618    6580 memcache.go:265] couldn't get current server API group list: Get "https://172.23.182.59:8441/api?timeout=32s": dial tcp 172.23.182.59:8441: connectex: No connection could be made because the target machine actively refused it.
	Unable to connect to the server: dial tcp 172.23.182.59:8441: connectex: No connection could be made because the target machine actively refused it.

** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-969800 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"E0122 11:03:21.465199    6580 memcache.go:265] couldn't get current server API group list: Get \"https://172.23.182.59:8441/api?timeout=32s\": dial tcp 172.23.182.59:8441: connectex: No connection could be made because the target machine actively refused it.\nE0122 11:03:23.562692    6580 memcache.go:265] couldn't get current server API group list: Get \"https://172.23.182.59:8441/api?timeout=32s\": dial tcp 172.23.182.59:8441: connectex: No connection could be made because the target machine actively refused it.\nE0122 11:03:25.584844    6580 memcache.go:265] couldn't get current server API group list: Get \"https://172.23.182.59:8441/api?timeout=32s\": dial tcp 172.23.182.59:8441: connectex: No connection could be made because the target machine actively refused it.\nE0122 11:03:27.619597    6580 memcache.go:265] couldn't get current server API group list: Get \"https://172.23.182.59:8441/api?timeout=32s\": dial tcp 172.23.182.59:8441: connec
tex: No connection could be made because the target machine actively refused it.\nE0122 11:03:29.677618    6580 memcache.go:265] couldn't get current server API group list: Get \"https://172.23.182.59:8441/api?timeout=32s\": dial tcp 172.23.182.59:8441: connectex: No connection could be made because the target machine actively refused it.\nUnable to connect to the server: dial tcp 172.23.182.59:8441: connectex: No connection could be made because the target machine actively refused it.\n"*: args "kubectl --context functional-969800 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-969800 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-969800 -n functional-969800
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-969800 -n functional-969800: exit status 2 (11.9363404s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0122 11:03:29.823562    5708 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-969800 logs -n 25
E0122 11:04:47.134460    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-969800 logs -n 25: (1m25.4293397s)
helpers_test.go:252: TestFunctional/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                 Args                                  |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| ip      | addons-760600 ip                                                      | addons-760600     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:44 UTC | 22 Jan 24 10:45 UTC |
	| addons  | addons-760600 addons disable                                          | addons-760600     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:45 UTC | 22 Jan 24 10:45 UTC |
	|         | ingress-dns --alsologtostderr                                         |                   |                   |         |                     |                     |
	|         | -v=1                                                                  |                   |                   |         |                     |                     |
	| addons  | addons-760600 addons disable                                          | addons-760600     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:45 UTC | 22 Jan 24 10:45 UTC |
	|         | ingress --alsologtostderr -v=1                                        |                   |                   |         |                     |                     |
	| addons  | addons-760600 addons disable                                          | addons-760600     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:45 UTC | 22 Jan 24 10:46 UTC |
	|         | gcp-auth --alsologtostderr                                            |                   |                   |         |                     |                     |
	|         | -v=1                                                                  |                   |                   |         |                     |                     |
	| stop    | -p addons-760600                                                      | addons-760600     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:46 UTC | 22 Jan 24 10:46 UTC |
	| addons  | enable dashboard -p                                                   | addons-760600     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:46 UTC | 22 Jan 24 10:47 UTC |
	|         | addons-760600                                                         |                   |                   |         |                     |                     |
	| addons  | disable dashboard -p                                                  | addons-760600     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:47 UTC | 22 Jan 24 10:47 UTC |
	|         | addons-760600                                                         |                   |                   |         |                     |                     |
	| addons  | disable gvisor -p                                                     | addons-760600     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:47 UTC | 22 Jan 24 10:47 UTC |
	|         | addons-760600                                                         |                   |                   |         |                     |                     |
	| delete  | -p addons-760600                                                      | addons-760600     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:47 UTC | 22 Jan 24 10:47 UTC |
	| start   | -p nospam-526800 -n=1 --memory=2250 --wait=false                      | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:47 UTC | 22 Jan 24 10:50 UTC |
	|         | --log_dir=C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 |                   |                   |         |                     |                     |
	|         | --driver=hyperv                                                       |                   |                   |         |                     |                     |
	| start   | nospam-526800 --log_dir                                               | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:50 UTC |                     |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800           |                   |                   |         |                     |                     |
	|         | start --dry-run                                                       |                   |                   |         |                     |                     |
	| start   | nospam-526800 --log_dir                                               | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:51 UTC |                     |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800           |                   |                   |         |                     |                     |
	|         | start --dry-run                                                       |                   |                   |         |                     |                     |
	| start   | nospam-526800 --log_dir                                               | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:51 UTC |                     |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800           |                   |                   |         |                     |                     |
	|         | start --dry-run                                                       |                   |                   |         |                     |                     |
	| pause   | nospam-526800 --log_dir                                               | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:51 UTC | 22 Jan 24 10:52 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800           |                   |                   |         |                     |                     |
	|         | pause                                                                 |                   |                   |         |                     |                     |
	| pause   | nospam-526800 --log_dir                                               | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:52 UTC | 22 Jan 24 10:52 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800           |                   |                   |         |                     |                     |
	|         | pause                                                                 |                   |                   |         |                     |                     |
	| pause   | nospam-526800 --log_dir                                               | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:52 UTC | 22 Jan 24 10:52 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800           |                   |                   |         |                     |                     |
	|         | pause                                                                 |                   |                   |         |                     |                     |
	| unpause | nospam-526800 --log_dir                                               | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:52 UTC | 22 Jan 24 10:52 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800           |                   |                   |         |                     |                     |
	|         | unpause                                                               |                   |                   |         |                     |                     |
	| unpause | nospam-526800 --log_dir                                               | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:52 UTC | 22 Jan 24 10:52 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800           |                   |                   |         |                     |                     |
	|         | unpause                                                               |                   |                   |         |                     |                     |
	| unpause | nospam-526800 --log_dir                                               | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:52 UTC | 22 Jan 24 10:52 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800           |                   |                   |         |                     |                     |
	|         | unpause                                                               |                   |                   |         |                     |                     |
	| stop    | nospam-526800 --log_dir                                               | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:52 UTC | 22 Jan 24 10:53 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800           |                   |                   |         |                     |                     |
	|         | stop                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-526800 --log_dir                                               | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:53 UTC | 22 Jan 24 10:53 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800           |                   |                   |         |                     |                     |
	|         | stop                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-526800 --log_dir                                               | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:53 UTC | 22 Jan 24 10:53 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800           |                   |                   |         |                     |                     |
	|         | stop                                                                  |                   |                   |         |                     |                     |
	| delete  | -p nospam-526800                                                      | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:53 UTC | 22 Jan 24 10:53 UTC |
	| start   | -p functional-969800                                                  | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:53 UTC | 22 Jan 24 10:57 UTC |
	|         | --memory=4000                                                         |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                                 |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                            |                   |                   |         |                     |                     |
	| start   | -p functional-969800                                                  | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:57 UTC |                     |
	|         | --alsologtostderr -v=8                                                |                   |                   |         |                     |                     |
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/22 10:57:40
	Running on machine: minikube7
	Binary: Built with gc go1.21.6 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0122 10:57:40.469886   10176 out.go:296] Setting OutFile to fd 668 ...
	I0122 10:57:40.471315   10176 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0122 10:57:40.471315   10176 out.go:309] Setting ErrFile to fd 532...
	I0122 10:57:40.471403   10176 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0122 10:57:40.492855   10176 out.go:303] Setting JSON to false
	I0122 10:57:40.496696   10176 start.go:128] hostinfo: {"hostname":"minikube7","uptime":40429,"bootTime":1705880631,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3930 Build 19045.3930","kernelVersion":"10.0.19045.3930 Build 19045.3930","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0122 10:57:40.496696   10176 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0122 10:57:40.498142   10176 out.go:177] * [functional-969800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	I0122 10:57:40.498142   10176 notify.go:220] Checking for updates...
	I0122 10:57:40.499579   10176 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 10:57:40.499736   10176 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0122 10:57:40.500343   10176 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0122 10:57:40.501324   10176 out.go:177]   - MINIKUBE_LOCATION=18014
	I0122 10:57:40.501745   10176 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 10:57:40.502974   10176 config.go:182] Loaded profile config "functional-969800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 10:57:40.502974   10176 driver.go:392] Setting default libvirt URI to qemu:///system
	I0122 10:57:45.928080   10176 out.go:177] * Using the hyperv driver based on existing profile
	I0122 10:57:45.928983   10176 start.go:298] selected driver: hyperv
	I0122 10:57:45.928983   10176 start.go:902] validating driver "hyperv" against &{Name:functional-969800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.28.4 ClusterName:functional-969800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:172.23.182.59 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:
/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0122 10:57:45.929543   10176 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0122 10:57:45.979518   10176 cni.go:84] Creating CNI manager for ""
	I0122 10:57:45.980100   10176 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0122 10:57:45.980100   10176 start_flags.go:321] config:
	{Name:functional-969800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-969800 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:172.23.182.59 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26214
4 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0122 10:57:45.980261   10176 iso.go:125] acquiring lock: {Name:mkdec6f26395f187031104148d2e41f41e1ae42b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 10:57:45.981049   10176 out.go:177] * Starting control plane node functional-969800 in cluster functional-969800
	I0122 10:57:45.982349   10176 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0122 10:57:45.982349   10176 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0122 10:57:45.982349   10176 cache.go:56] Caching tarball of preloaded images
	I0122 10:57:45.983003   10176 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0122 10:57:45.983003   10176 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0122 10:57:45.983545   10176 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-969800\config.json ...
	I0122 10:57:45.985896   10176 start.go:365] acquiring machines lock for functional-969800: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0122 10:57:45.985896   10176 start.go:369] acquired machines lock for "functional-969800" in 0s
	I0122 10:57:45.985896   10176 start.go:96] Skipping create...Using existing machine configuration
	I0122 10:57:45.985896   10176 fix.go:54] fixHost starting: 
	I0122 10:57:45.985896   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:57:48.714864   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:57:48.714864   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:57:48.714864   10176 fix.go:102] recreateIfNeeded on functional-969800: state=Running err=<nil>
	W0122 10:57:48.714864   10176 fix.go:128] unexpected machine state, will restart: <nil>
	I0122 10:57:48.716293   10176 out.go:177] * Updating the running hyperv "functional-969800" VM ...
	I0122 10:57:48.717336   10176 machine.go:88] provisioning docker machine ...
	I0122 10:57:48.717336   10176 buildroot.go:166] provisioning hostname "functional-969800"
	I0122 10:57:48.717336   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:57:50.876024   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:57:50.876229   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:57:50.876229   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:57:53.470625   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:57:53.470625   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:57:53.475376   10176 main.go:141] libmachine: Using SSH client type: native
	I0122 10:57:53.476351   10176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 10:57:53.476351   10176 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-969800 && echo "functional-969800" | sudo tee /etc/hostname
	I0122 10:57:53.638784   10176 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-969800
	
	I0122 10:57:53.638784   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:57:55.805671   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:57:55.805815   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:57:55.805815   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:57:58.401038   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:57:58.401038   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:57:58.407147   10176 main.go:141] libmachine: Using SSH client type: native
	I0122 10:57:58.407915   10176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 10:57:58.407915   10176 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-969800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-969800/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-969800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0122 10:57:58.548972   10176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 10:57:58.548972   10176 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0122 10:57:58.549601   10176 buildroot.go:174] setting up certificates
	I0122 10:57:58.549601   10176 provision.go:83] configureAuth start
	I0122 10:57:58.549601   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:00.709217   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:00.709410   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:00.709520   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:03.272329   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:03.272570   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:03.272686   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:05.424282   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:05.424282   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:05.424282   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:07.989209   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:07.989209   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:07.989209   10176 provision.go:138] copyHostCerts
	I0122 10:58:07.989628   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0122 10:58:07.990020   10176 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0122 10:58:07.990122   10176 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0122 10:58:07.990638   10176 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0122 10:58:07.992053   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0122 10:58:07.992258   10176 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0122 10:58:07.992345   10176 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0122 10:58:07.992434   10176 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0122 10:58:07.993591   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0122 10:58:07.993837   10176 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0122 10:58:07.993924   10176 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0122 10:58:07.994335   10176 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0122 10:58:07.995347   10176 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-969800 san=[172.23.182.59 172.23.182.59 localhost 127.0.0.1 minikube functional-969800]
	I0122 10:58:08.282789   10176 provision.go:172] copyRemoteCerts
	I0122 10:58:08.298910   10176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0122 10:58:08.298910   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:10.409009   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:10.409144   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:10.409236   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:12.976882   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:12.976956   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:12.977542   10176 sshutil.go:53] new ssh client: &{IP:172.23.182.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-969800\id_rsa Username:docker}
	I0122 10:58:13.085280   10176 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7863488s)
	I0122 10:58:13.085358   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0122 10:58:13.085992   10176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0122 10:58:13.126129   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0122 10:58:13.126556   10176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0122 10:58:13.168097   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0122 10:58:13.168507   10176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0122 10:58:13.217793   10176 provision.go:86] duration metric: configureAuth took 14.6681291s
	I0122 10:58:13.217918   10176 buildroot.go:189] setting minikube options for container-runtime
	I0122 10:58:13.218335   10176 config.go:182] Loaded profile config "functional-969800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 10:58:13.218335   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:15.351479   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:15.351479   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:15.351579   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:17.930279   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:17.930493   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:17.935586   10176 main.go:141] libmachine: Using SSH client type: native
	I0122 10:58:17.936273   10176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 10:58:17.936273   10176 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0122 10:58:18.079216   10176 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0122 10:58:18.079216   10176 buildroot.go:70] root file system type: tmpfs
	I0122 10:58:18.079787   10176 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0122 10:58:18.079878   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:20.240707   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:20.240707   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:20.240707   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:22.913834   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:22.913834   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:22.919127   10176 main.go:141] libmachine: Using SSH client type: native
	I0122 10:58:22.919857   10176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 10:58:22.919857   10176 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0122 10:58:23.085993   10176 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0122 10:58:23.085993   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:25.254494   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:25.254494   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:25.254617   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:27.803581   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:27.803653   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:27.809703   10176 main.go:141] libmachine: Using SSH client type: native
	I0122 10:58:27.810422   10176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 10:58:27.810422   10176 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0122 10:58:27.960981   10176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 10:58:27.960981   10176 machine.go:91] provisioned docker machine in 39.2434773s
	I0122 10:58:27.960981   10176 start.go:300] post-start starting for "functional-969800" (driver="hyperv")
	I0122 10:58:27.960981   10176 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0122 10:58:27.976090   10176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0122 10:58:27.976090   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:30.146064   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:30.146317   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:30.146317   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:32.720717   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:32.720900   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:32.720966   10176 sshutil.go:53] new ssh client: &{IP:172.23.182.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-969800\id_rsa Username:docker}
	I0122 10:58:32.831028   10176 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8548008s)
	I0122 10:58:32.844266   10176 ssh_runner.go:195] Run: cat /etc/os-release
	I0122 10:58:32.851138   10176 command_runner.go:130] > NAME=Buildroot
	I0122 10:58:32.851278   10176 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0122 10:58:32.851278   10176 command_runner.go:130] > ID=buildroot
	I0122 10:58:32.851490   10176 command_runner.go:130] > VERSION_ID=2021.02.12
	I0122 10:58:32.851490   10176 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0122 10:58:32.851544   10176 info.go:137] Remote host: Buildroot 2021.02.12
	I0122 10:58:32.851585   10176 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0122 10:58:32.852067   10176 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0122 10:58:32.853202   10176 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> 63962.pem in /etc/ssl/certs
	I0122 10:58:32.853202   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> /etc/ssl/certs/63962.pem
	I0122 10:58:32.854596   10176 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\6396\hosts -> hosts in /etc/test/nested/copy/6396
	I0122 10:58:32.854596   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\6396\hosts -> /etc/test/nested/copy/6396/hosts
	I0122 10:58:32.868168   10176 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/6396
	I0122 10:58:32.884833   10176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem --> /etc/ssl/certs/63962.pem (1708 bytes)
	I0122 10:58:32.932481   10176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\6396\hosts --> /etc/test/nested/copy/6396/hosts (40 bytes)
	I0122 10:58:32.976776   10176 start.go:303] post-start completed in 5.0157728s
	I0122 10:58:32.976776   10176 fix.go:56] fixHost completed within 46.9906788s
	I0122 10:58:32.976776   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:35.143871   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:35.143871   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:35.144017   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:37.703612   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:37.703920   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:37.710288   10176 main.go:141] libmachine: Using SSH client type: native
	I0122 10:58:37.710854   10176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 10:58:37.710854   10176 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0122 10:58:37.850913   10176 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705921117.851868669
	
	I0122 10:58:37.851011   10176 fix.go:206] guest clock: 1705921117.851868669
	I0122 10:58:37.851011   10176 fix.go:219] Guest: 2024-01-22 10:58:37.851868669 +0000 UTC Remote: 2024-01-22 10:58:32.9767767 +0000 UTC m=+52.675762201 (delta=4.875091969s)
	I0122 10:58:37.851175   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:40.010410   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:40.010601   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:40.010601   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:42.618291   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:42.618378   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:42.625417   10176 main.go:141] libmachine: Using SSH client type: native
	I0122 10:58:42.626844   10176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 10:58:42.626844   10176 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1705921117
	I0122 10:58:42.774588   10176 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan 22 10:58:37 UTC 2024
	
	I0122 10:58:42.774675   10176 fix.go:226] clock set: Mon Jan 22 10:58:37 UTC 2024
	 (err=<nil>)
	I0122 10:58:42.774675   10176 start.go:83] releasing machines lock for "functional-969800", held for 56.7885347s
	I0122 10:58:42.774814   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:44.916786   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:44.916786   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:44.916786   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:47.456679   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:47.456679   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:47.461274   10176 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0122 10:58:47.461385   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:47.472541   10176 ssh_runner.go:195] Run: cat /version.json
	I0122 10:58:47.472541   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:49.704663   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:49.704663   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:49.704663   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:49.704663   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:49.704663   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:49.705114   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:52.392500   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:52.392560   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:52.392560   10176 sshutil.go:53] new ssh client: &{IP:172.23.182.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-969800\id_rsa Username:docker}
	I0122 10:58:52.411796   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:52.411796   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:52.412754   10176 sshutil.go:53] new ssh client: &{IP:172.23.182.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-969800\id_rsa Username:docker}
	I0122 10:58:52.487973   10176 command_runner.go:130] > {"iso_version": "v1.32.1-1703784139-17866", "kicbase_version": "v0.0.42-1703723663-17866", "minikube_version": "v1.32.0", "commit": "eb69424d8f623d7cabea57d4395ce87adf1b5fc3"}
	I0122 10:58:52.488809   10176 ssh_runner.go:235] Completed: cat /version.json: (5.0162464s)
	I0122 10:58:52.502299   10176 ssh_runner.go:195] Run: systemctl --version
	I0122 10:58:52.579182   10176 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0122 10:58:52.579508   10176 command_runner.go:130] > systemd 247 (247)
	I0122 10:58:52.579508   10176 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0122 10:58:52.579508   10176 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1182111s)
	I0122 10:58:52.592839   10176 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0122 10:58:52.600589   10176 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0122 10:58:52.601287   10176 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0122 10:58:52.614022   10176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0122 10:58:52.629161   10176 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0122 10:58:52.629512   10176 start.go:475] detecting cgroup driver to use...
	I0122 10:58:52.629804   10176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 10:58:52.662423   10176 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0122 10:58:52.677419   10176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0122 10:58:52.706434   10176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0122 10:58:52.723206   10176 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0122 10:58:52.736346   10176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0122 10:58:52.766629   10176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 10:58:52.795500   10176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0122 10:58:52.825136   10176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 10:58:52.853412   10176 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0122 10:58:52.881996   10176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0122 10:58:52.913498   10176 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0122 10:58:52.930750   10176 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0122 10:58:52.944733   10176 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0122 10:58:52.973118   10176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 10:58:53.199921   10176 ssh_runner.go:195] Run: sudo systemctl restart containerd
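The run of `sed` commands logged above rewrites `/etc/containerd/config.toml` to pin the pause (sandbox) image and switch containerd to the cgroupfs driver, then reloads systemd and restarts containerd. A minimal sketch of the two key substitutions, run against a throwaway copy rather than the live config (the TOML contents here are an assumption for illustration, not the real file):

```shell
# Work on a temporary copy so the sketch is safe to run anywhere.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# Pin the sandbox image to the version the cluster expects.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
# Force the legacy cgroupfs driver instead of the systemd cgroup driver.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"

grep -E 'sandbox_image|SystemdCgroup' "$cfg"
```

On the real VM the log shows these edits applied with `sudo` to `/etc/containerd/config.toml`, followed by `systemctl daemon-reload` and `systemctl restart containerd`.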
	I0122 10:58:53.229360   10176 start.go:475] detecting cgroup driver to use...
	I0122 10:58:53.244205   10176 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0122 10:58:53.265654   10176 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0122 10:58:53.265856   10176 command_runner.go:130] > [Unit]
	I0122 10:58:53.265856   10176 command_runner.go:130] > Description=Docker Application Container Engine
	I0122 10:58:53.265856   10176 command_runner.go:130] > Documentation=https://docs.docker.com
	I0122 10:58:53.265856   10176 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0122 10:58:53.265856   10176 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0122 10:58:53.265856   10176 command_runner.go:130] > StartLimitBurst=3
	I0122 10:58:53.265856   10176 command_runner.go:130] > StartLimitIntervalSec=60
	I0122 10:58:53.265856   10176 command_runner.go:130] > [Service]
	I0122 10:58:53.265856   10176 command_runner.go:130] > Type=notify
	I0122 10:58:53.265856   10176 command_runner.go:130] > Restart=on-failure
	I0122 10:58:53.265856   10176 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0122 10:58:53.265856   10176 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0122 10:58:53.265856   10176 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0122 10:58:53.265856   10176 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0122 10:58:53.265856   10176 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0122 10:58:53.265856   10176 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0122 10:58:53.265856   10176 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0122 10:58:53.265856   10176 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0122 10:58:53.265856   10176 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0122 10:58:53.265856   10176 command_runner.go:130] > ExecStart=
	I0122 10:58:53.265856   10176 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0122 10:58:53.265856   10176 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0122 10:58:53.265856   10176 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0122 10:58:53.265856   10176 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0122 10:58:53.265856   10176 command_runner.go:130] > LimitNOFILE=infinity
	I0122 10:58:53.265856   10176 command_runner.go:130] > LimitNPROC=infinity
	I0122 10:58:53.265856   10176 command_runner.go:130] > LimitCORE=infinity
	I0122 10:58:53.265856   10176 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0122 10:58:53.265856   10176 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0122 10:58:53.265856   10176 command_runner.go:130] > TasksMax=infinity
	I0122 10:58:53.265856   10176 command_runner.go:130] > TimeoutStartSec=0
	I0122 10:58:53.265856   10176 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0122 10:58:53.265856   10176 command_runner.go:130] > Delegate=yes
	I0122 10:58:53.265856   10176 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0122 10:58:53.265856   10176 command_runner.go:130] > KillMode=process
	I0122 10:58:53.266496   10176 command_runner.go:130] > [Install]
	I0122 10:58:53.266496   10176 command_runner.go:130] > WantedBy=multi-user.target
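The empty `ExecStart=` followed by a populated one in the unit dump above is the standard systemd drop-in idiom its comments describe: an empty assignment clears the command inherited from the base unit before the replacement is added. The same pattern in a minimal, hypothetical drop-in (not the real minikube unit):

```shell
# Write a hypothetical drop-in demonstrating the ExecStart-reset idiom.
dropin=$(mktemp -d)/override.conf
cat > "$dropin" <<'EOF'
[Service]
# The empty assignment clears ExecStart inherited from the base unit;
# without it systemd sees two commands and refuses to start the service
# ("more than one ExecStart= setting, which is only allowed for Type=oneshot").
ExecStart=
ExecStart=/usr/bin/example --with-new-flags
EOF
grep -c '^ExecStart=' "$dropin"   # prints 2: the reset line plus the replacement
```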
	I0122 10:58:53.282083   10176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 10:58:53.311447   10176 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0122 10:58:53.359743   10176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 10:58:53.396608   10176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0122 10:58:53.415523   10176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 10:58:53.444264   10176 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0122 10:58:53.456104   10176 ssh_runner.go:195] Run: which cri-dockerd
	I0122 10:58:53.461931   10176 command_runner.go:130] > /usr/bin/cri-dockerd
	I0122 10:58:53.476026   10176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0122 10:58:53.490730   10176 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0122 10:58:53.529862   10176 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0122 10:58:53.744013   10176 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0122 10:58:53.975729   10176 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0122 10:58:53.975983   10176 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
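The 130-byte `/etc/docker/daemon.json` pushed here carries the cgroup-driver choice from the preceding step. The log does not show its contents; a plausible shape, written to a temp path and syntax-checked (an assumption for illustration, not the verbatim file):

```shell
# Hypothetical reconstruction of the daemon.json minikube writes when it
# configures docker for the "cgroupfs" driver (contents are an assumption).
dj=$(mktemp)
cat > "$dj" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF
# Sanity-check that the file is well-formed JSON before a daemon restart.
python3 -m json.tool "$dj" > /dev/null && echo "valid JSON"
```

A malformed `daemon.json` is one common way `systemctl restart docker` fails exactly as it does a few lines below, which is why validating the file before the restart is cheap insurance.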
	I0122 10:58:54.028787   10176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 10:58:54.249447   10176 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0122 11:00:05.628463   10176 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0122 11:00:05.628515   10176 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xe" for details.
	I0122 11:00:05.630958   10176 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.3811974s)
	I0122 11:00:05.644549   10176 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0122 11:00:05.669673   10176 command_runner.go:130] > -- Journal begins at Mon 2024-01-22 10:54:50 UTC, ends at Mon 2024-01-22 11:00:05 UTC. --
	I0122 11:00:05.670340   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	I0122 11:00:05.670340   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.640172956Z" level=info msg="Starting up"
	I0122 11:00:05.670340   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.641196772Z" level=info msg="containerd not running, starting managed containerd"
	I0122 11:00:05.670340   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.643170703Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=696
	I0122 11:00:05.670340   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.681307999Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	I0122 11:00:05.670433   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.711852976Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0122 11:00:05.670433   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.711950178Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.670523   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.714822923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.670535   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.714949725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.670535   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715319930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.670609   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715475833Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0122 11:00:05.670609   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715577834Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.670609   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715725437Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.670681   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715815938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.670681   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715939840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.670681   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716415947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.670751   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716515049Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0122 11:00:05.670751   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716533749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.670820   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716692852Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.670889   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716783253Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0122 11:00:05.670889   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716856454Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0122 11:00:05.670889   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716943756Z" level=info msg="metadata content store policy set" policy=shared
	I0122 11:00:05.670889   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731298280Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0122 11:00:05.670974   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731388081Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0122 11:00:05.670974   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731408582Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0122 11:00:05.670974   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731467083Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0122 11:00:05.671041   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731484883Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0122 11:00:05.671041   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731498383Z" level=info msg="NRI interface is disabled by configuration."
	I0122 11:00:05.671109   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731528784Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0122 11:00:05.671109   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731677686Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0122 11:00:05.671109   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731783888Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0122 11:00:05.671109   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731806788Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0122 11:00:05.671176   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731822488Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0122 11:00:05.671176   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731838588Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671176   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731859089Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671253   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731890989Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671301   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731924790Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671301   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731943290Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671301   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731959290Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671301   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731973491Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671392   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731992491Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0122 11:00:05.671392   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732288795Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0122 11:00:05.671392   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732814904Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671392   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732879305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671483   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732900805Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0122 11:00:05.671483   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732940506Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0122 11:00:05.671483   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733012907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671483   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733046407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671557   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733235810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671557   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733295711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671557   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733311411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671557   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733344312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671627   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733358612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671627   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733373012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671627   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733387713Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0122 11:00:05.671699   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733473314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671699   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733594716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671699   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733615016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671770   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733643517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671770   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733664217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671770   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733681917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671770   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733711618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671842   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733724518Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0122 11:00:05.671842   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733749418Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0122 11:00:05.671918   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733762418Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0122 11:00:05.671918   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733774019Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0122 11:00:05.671918   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734197325Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0122 11:00:05.672016   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734365128Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0122 11:00:05.672016   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734471630Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0122 11:00:05.672016   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734496030Z" level=info msg="containerd successfully booted in 0.054269s"
	I0122 11:00:05.672016   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.768676964Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0122 11:00:05.672016   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.783849401Z" level=info msg="Loading containers: start."
	I0122 11:00:05.672104   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.997712742Z" level=info msg="Loading containers: done."
	I0122 11:00:05.672104   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022214804Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	I0122 11:00:05.672104   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022261705Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	I0122 11:00:05.672104   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022274505Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	I0122 11:00:05.672104   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022281005Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	I0122 11:00:05.672104   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022301806Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	I0122 11:00:05.672183   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022416907Z" level=info msg="Daemon has completed initialization"
	I0122 11:00:05.672183   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.083652704Z" level=info msg="API listen on /var/run/docker.sock"
	I0122 11:00:05.672183   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.083926108Z" level=info msg="API listen on [::]:2376"
	I0122 11:00:05.672183   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 systemd[1]: Started Docker Application Container Engine.
	I0122 11:00:05.672248   10176 command_runner.go:130] > Jan 22 10:56:12 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	I0122 11:00:05.672248   10176 command_runner.go:130] > Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.621957907Z" level=info msg="Processing signal 'terminated'"
	I0122 11:00:05.672248   10176 command_runner.go:130] > Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623204307Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0122 11:00:05.672329   10176 command_runner.go:130] > Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623373807Z" level=info msg="Daemon shutdown complete"
	I0122 11:00:05.672329   10176 command_runner.go:130] > Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623417807Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0122 11:00:05.672385   10176 command_runner.go:130] > Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623429307Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0122 11:00:05.672460   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 systemd[1]: docker.service: Succeeded.
	I0122 11:00:05.672518   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	I0122 11:00:05.672518   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	I0122 11:00:05.672518   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.697995107Z" level=info msg="Starting up"
	I0122 11:00:05.672518   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.699240007Z" level=info msg="containerd not running, starting managed containerd"
	I0122 11:00:05.672574   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.700630907Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1031
	I0122 11:00:05.672596   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.732892407Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	I0122 11:00:05.672596   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.759794007Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0122 11:00:05.672649   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.759947407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.672649   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.762945807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.672725   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763156107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.672833   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763474707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.672888   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763800907Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0122 11:00:05.672929   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764170707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764208607Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764225307Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764251107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764439107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764536207Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764552707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764742807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764836407Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764867007Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764897007Z" level=info msg="metadata content store policy set" policy=shared
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765451307Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765545607Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765570407Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765599407Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765631407Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765670507Z" level=info msg="NRI interface is disabled by configuration."
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765715707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765770507Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765918107Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765954407Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765992607Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766011607Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766135807Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766161807Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766178607Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766211907Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766239507Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766258407Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766273107Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766317107Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767449407Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767559907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767585207Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767614707Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767679407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767698307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767713707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767728107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.674886   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767743307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.674886   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767759107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.674886   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767773607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.674886   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767790107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.674886   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767807907Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0122 11:00:05.675068   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767846207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.675068   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767884107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.675068   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767901707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.675068   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767917307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.675164   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767934707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.675164   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767953007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.675164   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767985707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.675164   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768000707Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0122 11:00:05.675164   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768018307Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0122 11:00:05.675282   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768110907Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0122 11:00:05.675282   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768150607Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0122 11:00:05.675282   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770293407Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0122 11:00:05.675396   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770589207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0122 11:00:05.675396   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770747207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0122 11:00:05.675396   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770779907Z" level=info msg="containerd successfully booted in 0.039287s"
	I0122 11:00:05.675396   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.796437607Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0122 11:00:05.675484   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.812301607Z" level=info msg="Loading containers: start."
	I0122 11:00:05.675484   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.959771407Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0122 11:00:05.675484   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.025356107Z" level=info msg="Loading containers: done."
	I0122 11:00:05.675484   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043938307Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	I0122 11:00:05.675563   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043967507Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	I0122 11:00:05.675563   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043976207Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	I0122 11:00:05.675563   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043982907Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	I0122 11:00:05.675679   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.044002007Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	I0122 11:00:05.675679   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.044081507Z" level=info msg="Daemon has completed initialization"
	I0122 11:00:05.675679   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.084793407Z" level=info msg="API listen on /var/run/docker.sock"
	I0122 11:00:05.675751   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 systemd[1]: Started Docker Application Container Engine.
	I0122 11:00:05.675751   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.087289507Z" level=info msg="API listen on [::]:2376"
	I0122 11:00:05.675751   10176 command_runner.go:130] > Jan 22 10:56:24 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	I0122 11:00:05.675751   10176 command_runner.go:130] > Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.888530207Z" level=info msg="Processing signal 'terminated'"
	I0122 11:00:05.675824   10176 command_runner.go:130] > Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890215407Z" level=info msg="Daemon shutdown complete"
	I0122 11:00:05.675824   10176 command_runner.go:130] > Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890260307Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0122 11:00:05.675992   10176 command_runner.go:130] > Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890587307Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	I0122 11:00:05.676061   10176 command_runner.go:130] > Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890740407Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0122 11:00:05.676061   10176 command_runner.go:130] > Jan 22 10:56:25 functional-969800 systemd[1]: docker.service: Succeeded.
	I0122 11:00:05.676135   10176 command_runner.go:130] > Jan 22 10:56:25 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	I0122 11:00:05.676135   10176 command_runner.go:130] > Jan 22 10:56:25 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	I0122 11:00:05.676135   10176 command_runner.go:130] > Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.965408907Z" level=info msg="Starting up"
	I0122 11:00:05.676135   10176 command_runner.go:130] > Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.966518207Z" level=info msg="containerd not running, starting managed containerd"
	I0122 11:00:05.676135   10176 command_runner.go:130] > Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.967588307Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1337
	I0122 11:00:05.676135   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.001795407Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	I0122 11:00:05.676234   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.035509507Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0122 11:00:05.676234   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.035631707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.676308   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038165807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.676308   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038334707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.676345   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038628507Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.676345   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038744407Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0122 11:00:05.676418   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038816807Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.676418   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038858807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.676418   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038871007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.676492   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038894707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.676492   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039248607Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.676492   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039359407Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0122 11:00:05.676564   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039378107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.676594   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039547707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.676653   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039641707Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039668207Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039685807Z" level=info msg="metadata content store policy set" policy=shared
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039841507Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039933007Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040079807Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040230207Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040269807Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040281707Z" level=info msg="NRI interface is disabled by configuration."
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040294707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040361507Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040377307Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040391107Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040409607Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040429307Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040449707Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040463607Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040482307Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040514007Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040527507Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040540307Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040551607Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040608107Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042558207Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042674107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677265   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042695907Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0122 11:00:05.677265   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042734707Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0122 11:00:05.677265   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042787207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677265   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042916307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677265   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042937207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677265   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042958307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677265   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042974407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677410   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042993607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043009507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043463207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043558107Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043662307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043706107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043727707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043761007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043778507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043801907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043893407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043920207Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043942907Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043958907Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043974107Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044581807Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044705407Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044803207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044899507Z" level=info msg="containerd successfully booted in 0.045219s"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1331]: time="2024-01-22T10:56:26.371254407Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.294692607Z" level=info msg="Loading containers: start."
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.459462607Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.525291207Z" level=info msg="Loading containers: done."
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544516307Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544552907Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544562107Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544572107Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544597307Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544708807Z" level=info msg="Daemon has completed initialization"
	I0122 11:00:05.678028   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.576971407Z" level=info msg="API listen on [::]:2376"
	I0122 11:00:05.678028   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.577092907Z" level=info msg="API listen on /var/run/docker.sock"
	I0122 11:00:05.678028   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 systemd[1]: Started Docker Application Container Engine.
	I0122 11:00:05.678028   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643012417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678028   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643161510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643191508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643608487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648436541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648545436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648613232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648628031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650622030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650800621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650820520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650831019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.675561460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.675860444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.676066234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.676182428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627214283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627479370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627708859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627925149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.661590442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.665215768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.665413259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.667762047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.837739331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838092414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678708   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838119012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678708   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838132512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678708   10176 command_runner.go:130] > Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029726252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678708   10176 command_runner.go:130] > Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029885345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678708   10176 command_runner.go:130] > Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029907544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678852   10176 command_runner.go:130] > Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029921943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678884   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442471250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678884   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442551949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678933   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442585148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442603448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.636810797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637124693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637309591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637408590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.128989239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129139537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129211936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129226336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.005359252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.018527500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.020526276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.020768074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531424444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531481843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531496243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531507943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.163527770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.164982859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.165023859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.165340756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.271367222Z" level=info msg="Processing signal 'terminated'"
	I0122 11:00:05.679489   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.408491526Z" level=info msg="ignoring event" container=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.679489   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.408935427Z" level=info msg="shim disconnected" id=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c namespace=moby
	I0122 11:00:05.679489   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.409367629Z" level=warning msg="cleaning up after shim disconnected" id=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c namespace=moby
	I0122 11:00:05.679489   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.409386829Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482131396Z" level=info msg="shim disconnected" id=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482242197Z" level=warning msg="cleaning up after shim disconnected" id=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482643498Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.485265908Z" level=info msg="ignoring event" container=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.514982317Z" level=info msg="ignoring event" container=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.517274025Z" level=info msg="shim disconnected" id=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.518024528Z" level=warning msg="cleaning up after shim disconnected" id=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.518088128Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.533465185Z" level=info msg="ignoring event" container=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533775786Z" level=info msg="shim disconnected" id=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533878986Z" level=warning msg="cleaning up after shim disconnected" id=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533896387Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.541816616Z" level=info msg="ignoring event" container=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.544778327Z" level=info msg="shim disconnected" id=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.545052128Z" level=warning msg="cleaning up after shim disconnected" id=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.545071928Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.564756500Z" level=info msg="ignoring event" container=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.566095405Z" level=info msg="shim disconnected" id=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.580413557Z" level=warning msg="cleaning up after shim disconnected" id=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.580441258Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680179   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590163393Z" level=info msg="shim disconnected" id=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f namespace=moby
	I0122 11:00:05.680179   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590197393Z" level=warning msg="cleaning up after shim disconnected" id=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f namespace=moby
	I0122 11:00:05.680179   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590265894Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.600700032Z" level=info msg="shim disconnected" id=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.600935033Z" level=info msg="ignoring event" container=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.600978833Z" level=info msg="ignoring event" container=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.611137470Z" level=info msg="ignoring event" container=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611370671Z" level=warning msg="cleaning up after shim disconnected" id=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611429671Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611635772Z" level=info msg="shim disconnected" id=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.612703976Z" level=warning msg="cleaning up after shim disconnected" id=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.613303678Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.613702780Z" level=info msg="ignoring event" container=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.616524590Z" level=info msg="ignoring event" container=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.618830499Z" level=info msg="shim disconnected" id=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.618976199Z" level=warning msg="cleaning up after shim disconnected" id=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.627451230Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.628495534Z" level=info msg="shim disconnected" id=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.630007140Z" level=warning msg="cleaning up after shim disconnected" id=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.630196840Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.648394307Z" level=info msg="shim disconnected" id=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.650553715Z" level=info msg="ignoring event" container=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.658301844Z" level=warning msg="cleaning up after shim disconnected" id=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.658345944Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:59 functional-969800 dockerd[1331]: time="2024-01-22T10:58:59.444520533Z" level=info msg="ignoring event" container=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.447631244Z" level=info msg="shim disconnected" id=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.448443247Z" level=warning msg="cleaning up after shim disconnected" id=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.448523248Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.436253177Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.489047672Z" level=info msg="ignoring event" container=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.490548477Z" level=info msg="shim disconnected" id=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.490993079Z" level=warning msg="cleaning up after shim disconnected" id=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.491010379Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.536562746Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.536997048Z" level=info msg="Daemon shutdown complete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.537118848Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.537155648Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:05 functional-969800 systemd[1]: docker.service: Succeeded.
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:05 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:05 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:05 functional-969800 dockerd[5681]: time="2024-01-22T10:59:05.618184021Z" level=info msg="Starting up"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 11:00:05 functional-969800 dockerd[5681]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 11:00:05 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 11:00:05 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 11:00:05 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	I0122 11:00:05.703571   10176 out.go:177] 
	W0122 11:00:05.704224   10176 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Mon 2024-01-22 10:54:50 UTC, ends at Mon 2024-01-22 11:00:05 UTC. --
	Jan 22 10:55:41 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.640172956Z" level=info msg="Starting up"
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.641196772Z" level=info msg="containerd not running, starting managed containerd"
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.643170703Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=696
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.681307999Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.711852976Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.711950178Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.714822923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.714949725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715319930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715475833Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715577834Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715725437Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715815938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715939840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716415947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716515049Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716533749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716692852Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716783253Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716856454Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716943756Z" level=info msg="metadata content store policy set" policy=shared
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731298280Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731388081Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731408582Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731467083Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731484883Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731498383Z" level=info msg="NRI interface is disabled by configuration."
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731528784Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731677686Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731783888Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731806788Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731822488Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731838588Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731859089Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731890989Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731924790Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731943290Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731959290Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731973491Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731992491Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732288795Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732814904Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732879305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732900805Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732940506Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733012907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733046407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733235810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733295711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733311411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733344312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733358612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733373012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733387713Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733473314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733594716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733615016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733643517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733664217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733681917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733711618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733724518Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733749418Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733762418Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733774019Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734197325Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734365128Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734471630Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734496030Z" level=info msg="containerd successfully booted in 0.054269s"
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.768676964Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.783849401Z" level=info msg="Loading containers: start."
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.997712742Z" level=info msg="Loading containers: done."
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022214804Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022261705Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022274505Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022281005Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022301806Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022416907Z" level=info msg="Daemon has completed initialization"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.083652704Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.083926108Z" level=info msg="API listen on [::]:2376"
	Jan 22 10:55:42 functional-969800 systemd[1]: Started Docker Application Container Engine.
	Jan 22 10:56:12 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.621957907Z" level=info msg="Processing signal 'terminated'"
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623204307Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623373807Z" level=info msg="Daemon shutdown complete"
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623417807Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623429307Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 22 10:56:13 functional-969800 systemd[1]: docker.service: Succeeded.
	Jan 22 10:56:13 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 10:56:13 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.697995107Z" level=info msg="Starting up"
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.699240007Z" level=info msg="containerd not running, starting managed containerd"
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.700630907Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1031
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.732892407Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.759794007Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.759947407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.762945807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763156107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763474707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763800907Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764170707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764208607Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764225307Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764251107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764439107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764536207Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764552707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764742807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764836407Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764867007Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764897007Z" level=info msg="metadata content store policy set" policy=shared
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765451307Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765545607Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765570407Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765599407Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765631407Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765670507Z" level=info msg="NRI interface is disabled by configuration."
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765715707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765770507Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765918107Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765954407Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765992607Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766011607Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766135807Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766161807Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766178607Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766211907Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766239507Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766258407Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766273107Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766317107Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767449407Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767559907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767585207Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767614707Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767679407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767698307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767713707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767728107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767743307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767759107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767773607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767790107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767807907Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767846207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767884107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767901707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767917307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767934707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767953007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767985707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768000707Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768018307Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768110907Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768150607Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770293407Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770589207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770747207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770779907Z" level=info msg="containerd successfully booted in 0.039287s"
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.796437607Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.812301607Z" level=info msg="Loading containers: start."
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.959771407Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.025356107Z" level=info msg="Loading containers: done."
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043938307Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043967507Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043976207Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043982907Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.044002007Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.044081507Z" level=info msg="Daemon has completed initialization"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.084793407Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 22 10:56:14 functional-969800 systemd[1]: Started Docker Application Container Engine.
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.087289507Z" level=info msg="API listen on [::]:2376"
	Jan 22 10:56:24 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.888530207Z" level=info msg="Processing signal 'terminated'"
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890215407Z" level=info msg="Daemon shutdown complete"
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890260307Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890587307Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890740407Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 22 10:56:25 functional-969800 systemd[1]: docker.service: Succeeded.
	Jan 22 10:56:25 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 10:56:25 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.965408907Z" level=info msg="Starting up"
	Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.966518207Z" level=info msg="containerd not running, starting managed containerd"
	Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.967588307Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1337
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.001795407Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.035509507Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.035631707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038165807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038334707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038628507Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038744407Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038816807Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038858807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038871007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038894707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039248607Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039359407Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039378107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039547707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039641707Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039668207Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039685807Z" level=info msg="metadata content store policy set" policy=shared
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039841507Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039933007Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040079807Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040230207Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040269807Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040281707Z" level=info msg="NRI interface is disabled by configuration."
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040294707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040361507Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040377307Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040391107Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040409607Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040429307Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040449707Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040463607Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040482307Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040514007Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040527507Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040540307Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040551607Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040608107Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042558207Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042674107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042695907Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042734707Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042787207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042916307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042937207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042958307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042974407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042993607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043009507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043463207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043558107Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043662307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043706107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043727707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043761007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043778507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043801907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043893407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043920207Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043942907Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043958907Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043974107Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044581807Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044705407Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044803207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044899507Z" level=info msg="containerd successfully booted in 0.045219s"
	Jan 22 10:56:26 functional-969800 dockerd[1331]: time="2024-01-22T10:56:26.371254407Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.294692607Z" level=info msg="Loading containers: start."
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.459462607Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.525291207Z" level=info msg="Loading containers: done."
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544516307Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544552907Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544562107Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544572107Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544597307Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544708807Z" level=info msg="Daemon has completed initialization"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.576971407Z" level=info msg="API listen on [::]:2376"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.577092907Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 22 10:56:30 functional-969800 systemd[1]: Started Docker Application Container Engine.
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643012417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643161510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643191508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643608487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648436541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648545436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648613232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648628031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650622030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650800621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650820520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650831019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.675561460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.675860444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.676066234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.676182428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627214283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627479370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627708859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627925149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.661590442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.665215768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.665413259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.667762047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.837739331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838092414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838119012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838132512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029726252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029885345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029907544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029921943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442471250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442551949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442585148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442603448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.636810797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637124693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637309591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637408590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.128989239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129139537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129211936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129226336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.005359252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.018527500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.020526276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.020768074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531424444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531481843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531496243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531507943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.163527770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.164982859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.165023859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.165340756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:58:54 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.271367222Z" level=info msg="Processing signal 'terminated'"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.408491526Z" level=info msg="ignoring event" container=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.408935427Z" level=info msg="shim disconnected" id=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.409367629Z" level=warning msg="cleaning up after shim disconnected" id=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.409386829Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482131396Z" level=info msg="shim disconnected" id=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482242197Z" level=warning msg="cleaning up after shim disconnected" id=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482643498Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.485265908Z" level=info msg="ignoring event" container=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.514982317Z" level=info msg="ignoring event" container=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.517274025Z" level=info msg="shim disconnected" id=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.518024528Z" level=warning msg="cleaning up after shim disconnected" id=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.518088128Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.533465185Z" level=info msg="ignoring event" container=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533775786Z" level=info msg="shim disconnected" id=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533878986Z" level=warning msg="cleaning up after shim disconnected" id=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533896387Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.541816616Z" level=info msg="ignoring event" container=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.544778327Z" level=info msg="shim disconnected" id=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.545052128Z" level=warning msg="cleaning up after shim disconnected" id=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.545071928Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.564756500Z" level=info msg="ignoring event" container=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.566095405Z" level=info msg="shim disconnected" id=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.580413557Z" level=warning msg="cleaning up after shim disconnected" id=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.580441258Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590163393Z" level=info msg="shim disconnected" id=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590197393Z" level=warning msg="cleaning up after shim disconnected" id=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590265894Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.600700032Z" level=info msg="shim disconnected" id=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.600935033Z" level=info msg="ignoring event" container=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.600978833Z" level=info msg="ignoring event" container=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.611137470Z" level=info msg="ignoring event" container=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611370671Z" level=warning msg="cleaning up after shim disconnected" id=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611429671Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611635772Z" level=info msg="shim disconnected" id=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.612703976Z" level=warning msg="cleaning up after shim disconnected" id=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.613303678Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.613702780Z" level=info msg="ignoring event" container=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.616524590Z" level=info msg="ignoring event" container=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.618830499Z" level=info msg="shim disconnected" id=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.618976199Z" level=warning msg="cleaning up after shim disconnected" id=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.627451230Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.628495534Z" level=info msg="shim disconnected" id=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.630007140Z" level=warning msg="cleaning up after shim disconnected" id=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.630196840Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.648394307Z" level=info msg="shim disconnected" id=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.650553715Z" level=info msg="ignoring event" container=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.658301844Z" level=warning msg="cleaning up after shim disconnected" id=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.658345944Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:59 functional-969800 dockerd[1331]: time="2024-01-22T10:58:59.444520533Z" level=info msg="ignoring event" container=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.447631244Z" level=info msg="shim disconnected" id=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 namespace=moby
	Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.448443247Z" level=warning msg="cleaning up after shim disconnected" id=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 namespace=moby
	Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.448523248Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.436253177Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.489047672Z" level=info msg="ignoring event" container=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.490548477Z" level=info msg="shim disconnected" id=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.490993079Z" level=warning msg="cleaning up after shim disconnected" id=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.491010379Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.536562746Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.536997048Z" level=info msg="Daemon shutdown complete"
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.537118848Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.537155648Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 22 10:59:05 functional-969800 systemd[1]: docker.service: Succeeded.
	Jan 22 10:59:05 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 10:59:05 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 10:59:05 functional-969800 dockerd[5681]: time="2024-01-22T10:59:05.618184021Z" level=info msg="Starting up"
	Jan 22 11:00:05 functional-969800 dockerd[5681]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:00:05 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:00:05 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:00:05 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0122 11:00:05.705403   10176 out.go:239] * 
	W0122 11:00:05.706768   10176 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0122 11:00:05.707971   10176 out.go:177] 
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-01-22 10:54:50 UTC, ends at Mon 2024-01-22 11:04:06 UTC. --
	Jan 22 11:02:06 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:02:06 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:02:06 functional-969800 cri-dockerd[1222]: time="2024-01-22T11:02:06Z" level=error msg="Unable to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:02:06 functional-969800 cri-dockerd[1222]: time="2024-01-22T11:02:06Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Jan 22 11:02:06 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
	Jan 22 11:02:06 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:02:06 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:02:06 functional-969800 dockerd[6195]: time="2024-01-22T11:02:06.252465592Z" level=info msg="Starting up"
	Jan 22 11:03:06 functional-969800 dockerd[6195]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:03:06 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:03:06 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:03:06 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:03:06 functional-969800 cri-dockerd[1222]: time="2024-01-22T11:03:06Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Jan 22 11:03:06 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 4.
	Jan 22 11:03:06 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:03:06 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:03:06 functional-969800 dockerd[6342]: time="2024-01-22T11:03:06.467155089Z" level=info msg="Starting up"
	Jan 22 11:04:06 functional-969800 dockerd[6342]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:04:06 functional-969800 cri-dockerd[1222]: time="2024-01-22T11:04:06Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Jan 22 11:04:06 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:04:06 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:04:06 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:04:06 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 5.
	Jan 22 11:04:06 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:04:06 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	
	==> describe nodes <==
	
	==> dmesg <==
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +7.735107] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jan22 10:55] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.139168] systemd-fstab-generator[682]: Ignoring "noauto" for root device
	[Jan22 10:56] systemd-fstab-generator[954]: Ignoring "noauto" for root device
	[  +0.548146] systemd-fstab-generator[992]: Ignoring "noauto" for root device
	[  +0.177021] systemd-fstab-generator[1003]: Ignoring "noauto" for root device
	[  +0.192499] systemd-fstab-generator[1016]: Ignoring "noauto" for root device
	[  +1.326236] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.405236] systemd-fstab-generator[1177]: Ignoring "noauto" for root device
	[  +0.162074] systemd-fstab-generator[1188]: Ignoring "noauto" for root device
	[  +0.187859] systemd-fstab-generator[1199]: Ignoring "noauto" for root device
	[  +0.249478] systemd-fstab-generator[1214]: Ignoring "noauto" for root device
	[  +9.926271] systemd-fstab-generator[1322]: Ignoring "noauto" for root device
	[  +5.555073] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.934437] systemd-fstab-generator[1705]: Ignoring "noauto" for root device
	[  +0.641554] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.684973] systemd-fstab-generator[2663]: Ignoring "noauto" for root device
	[Jan22 10:57] kauditd_printk_skb: 21 callbacks suppressed
	[Jan22 10:58] systemd-fstab-generator[5187]: Ignoring "noauto" for root device
	[  +0.564007] systemd-fstab-generator[5225]: Ignoring "noauto" for root device
	[  +0.215913] systemd-fstab-generator[5236]: Ignoring "noauto" for root device
	[  +0.280562] systemd-fstab-generator[5249]: Ignoring "noauto" for root device
	
	
	==> kernel <==
	 11:05:06 up 10 min,  0 users,  load average: 0.00, 0.17, 0.16
	Linux functional-969800 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-22 10:54:50 UTC, ends at Mon 2024-01-22 11:05:07 UTC. --
	Jan 22 11:04:59 functional-969800 kubelet[2697]: E0122 11:04:59.898031    2697 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"functional-969800.17aca63cf24c0391", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"functional-969800", UID:"functional-969800", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeNotReady", Message:"Node functional-969800 status is now: NodeNotReady", Source:v1.EventSource{Component:"kubelet", Host:"functional-969800"}, FirstTimestamp:time.Date(2024, time.January, 22, 10, 58, 59, 559891857, time.Local), LastTimestamp:time.Date(2024, time.January, 22, 10, 58, 59, 559891857, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"functional-969800"}': 'Post "https://control-plane.minikube.internal:8441/api/v1/namespaces/default/events": dial tcp 172.23.182.59:8441: connect: connection refused'(may retry after sleeping)
	Jan 22 11:05:00 functional-969800 kubelet[2697]: E0122 11:05:00.170687    2697 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-969800?timeout=10s\": dial tcp 172.23.182.59:8441: connect: connection refused" interval="7s"
	Jan 22 11:05:02 functional-969800 kubelet[2697]: E0122 11:05:02.792483    2697 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-969800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-969800?resourceVersion=0&timeout=10s\": dial tcp 172.23.182.59:8441: connect: connection refused"
	Jan 22 11:05:02 functional-969800 kubelet[2697]: E0122 11:05:02.793010    2697 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-969800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-969800?timeout=10s\": dial tcp 172.23.182.59:8441: connect: connection refused"
	Jan 22 11:05:02 functional-969800 kubelet[2697]: E0122 11:05:02.793371    2697 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-969800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-969800?timeout=10s\": dial tcp 172.23.182.59:8441: connect: connection refused"
	Jan 22 11:05:02 functional-969800 kubelet[2697]: E0122 11:05:02.793925    2697 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-969800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-969800?timeout=10s\": dial tcp 172.23.182.59:8441: connect: connection refused"
	Jan 22 11:05:02 functional-969800 kubelet[2697]: E0122 11:05:02.794441    2697 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-969800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-969800?timeout=10s\": dial tcp 172.23.182.59:8441: connect: connection refused"
	Jan 22 11:05:02 functional-969800 kubelet[2697]: E0122 11:05:02.794557    2697 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
	Jan 22 11:05:03 functional-969800 kubelet[2697]: E0122 11:05:03.602455    2697 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 6m9.685725462s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Jan 22 11:05:06 functional-969800 kubelet[2697]: E0122 11:05:06.816787    2697 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jan 22 11:05:06 functional-969800 kubelet[2697]: E0122 11:05:06.817318    2697 kuberuntime_image.go:103] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:05:06 functional-969800 kubelet[2697]: I0122 11:05:06.817392    2697 image_gc_manager.go:210] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:05:06 functional-969800 kubelet[2697]: E0122 11:05:06.817707    2697 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jan 22 11:05:06 functional-969800 kubelet[2697]: E0122 11:05:06.817914    2697 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:05:06 functional-969800 kubelet[2697]: E0122 11:05:06.817943    2697 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:05:06 functional-969800 kubelet[2697]: E0122 11:05:06.818115    2697 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:05:06 functional-969800 kubelet[2697]: E0122 11:05:06.818192    2697 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:05:06 functional-969800 kubelet[2697]: E0122 11:05:06.818334    2697 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jan 22 11:05:06 functional-969800 kubelet[2697]: E0122 11:05:06.818360    2697 container_log_manager.go:185] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:05:06 functional-969800 kubelet[2697]: E0122 11:05:06.818898    2697 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jan 22 11:05:06 functional-969800 kubelet[2697]: E0122 11:05:06.819019    2697 kuberuntime_container.go:477] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:05:06 functional-969800 kubelet[2697]: E0122 11:05:06.818915    2697 kubelet.go:2865] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:05:06 functional-969800 kubelet[2697]: E0122 11:05:06.819848    2697 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jan 22 11:05:06 functional-969800 kubelet[2697]: E0122 11:05:06.820036    2697 kuberuntime_container.go:477] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jan 22 11:05:06 functional-969800 kubelet[2697]: E0122 11:05:06.821758    2697 kubelet.go:1402] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	
	

-- /stdout --
** stderr ** 
	W0122 11:03:41.747855   13912 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0122 11:04:06.477286   13912 logs.go:281] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:04:06.508932   13912 logs.go:281] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:04:06.541777   13912 logs.go:281] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:04:06.577412   13912 logs.go:281] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:04:06.610756   13912 logs.go:281] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:04:06.642462   13912 logs.go:281] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:04:06.675299   13912 logs.go:281] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:04:06.706342   13912 logs.go:281] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:05:06.817141   13912 logs.go:195] command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-01-22T11:04:08Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	 output: "\n** stderr ** \ntime=\"2024-01-22T11:04:08Z\" level=fatal msg=\"validate service connection: validate CRI v1 runtime API for endpoint \\\"unix:///var/run/cri-dockerd.sock\\\": rpc error: code = DeadlineExceeded desc = context deadline exceeded\"\nCannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?\n\n** /stderr **"
	E0122 11:05:06.919589   13912 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8441 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: container status, describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-969800 -n functional-969800
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-969800 -n functional-969800: exit status 2 (12.044294s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0122 11:05:07.633174   12960 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-969800" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (120.24s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (11.37s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-969800 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-969800 ssh sudo crictl images: exit status 1 (11.3657977s)

-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

-- /stdout --
** stderr ** 
	W0122 11:12:09.836308   11652 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1122: failed to get images by "out/minikube-windows-amd64.exe -p functional-969800 ssh sudo crictl images" ssh exit status 1
functional_test.go:1126: expected sha for pause:3.3 "0184c1613d929" to be in the output but got *
-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

-- /stdout --
** stderr ** 
	W0122 11:12:09.836308   11652 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr ***
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (11.37s)

TestFunctional/serial/CacheCmd/cache/cache_reload (179.71s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-969800 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-969800 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 1 (47.7730385s)

-- stdout --
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

-- /stdout --
** stderr ** 
	W0122 11:12:21.196694    5020 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1146: failed to manually delete image "out/minikube-windows-amd64.exe -p functional-969800 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 1
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-969800 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-969800 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (11.4982069s)

-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

-- /stdout --
** stderr ** 
	W0122 11:13:08.980540    8376 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-969800 cache reload
E0122 11:13:23.948184    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-969800 cache reload: (1m49.0258288s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-969800 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-969800 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (11.4129473s)

-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

-- /stdout --
** stderr ** 
	W0122 11:15:09.498354    5296 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1161: expected "out/minikube-windows-amd64.exe -p functional-969800 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 1
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (179.71s)

TestFunctional/serial/MinikubeKubectlCmd (120.57s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-969800 kubectl -- --context functional-969800 get pods
E0122 11:18:23.948523    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
functional_test.go:712: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-969800 kubectl -- --context functional-969800 get pods: exit status 1 (10.7177479s)

** stderr ** 
	W0122 11:18:22.979836   10812 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0122 11:18:25.320849    5836 memcache.go:265] couldn't get current server API group list: Get "https://172.23.182.59:8441/api?timeout=32s": dial tcp 172.23.182.59:8441: connectex: No connection could be made because the target machine actively refused it.
	E0122 11:18:27.441097    5836 memcache.go:265] couldn't get current server API group list: Get "https://172.23.182.59:8441/api?timeout=32s": dial tcp 172.23.182.59:8441: connectex: No connection could be made because the target machine actively refused it.
	E0122 11:18:29.484618    5836 memcache.go:265] couldn't get current server API group list: Get "https://172.23.182.59:8441/api?timeout=32s": dial tcp 172.23.182.59:8441: connectex: No connection could be made because the target machine actively refused it.
	E0122 11:18:31.512027    5836 memcache.go:265] couldn't get current server API group list: Get "https://172.23.182.59:8441/api?timeout=32s": dial tcp 172.23.182.59:8441: connectex: No connection could be made because the target machine actively refused it.
	E0122 11:18:33.531021    5836 memcache.go:265] couldn't get current server API group list: Get "https://172.23.182.59:8441/api?timeout=32s": dial tcp 172.23.182.59:8441: connectex: No connection could be made because the target machine actively refused it.
	Unable to connect to the server: dial tcp 172.23.182.59:8441: connectex: No connection could be made because the target machine actively refused it.

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-windows-amd64.exe -p functional-969800 kubectl -- --context functional-969800 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-969800 -n functional-969800
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-969800 -n functional-969800: exit status 2 (11.9286163s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0122 11:18:33.692385    3672 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-969800 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-969800 logs -n 25: (1m25.2912554s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-526800 --log_dir                                     | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:52 UTC | 22 Jan 24 10:52 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-526800 --log_dir                                     | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:52 UTC | 22 Jan 24 10:52 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-526800 --log_dir                                     | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:52 UTC | 22 Jan 24 10:52 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-526800 --log_dir                                     | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:52 UTC | 22 Jan 24 10:52 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-526800 --log_dir                                     | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:52 UTC | 22 Jan 24 10:53 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-526800 --log_dir                                     | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:53 UTC | 22 Jan 24 10:53 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-526800 --log_dir                                     | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:53 UTC | 22 Jan 24 10:53 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-526800                                            | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:53 UTC | 22 Jan 24 10:53 UTC |
	| start   | -p functional-969800                                        | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:53 UTC | 22 Jan 24 10:57 UTC |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-969800                                        | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:57 UTC |                     |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-969800 cache add                                 | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:05 UTC | 22 Jan 24 11:07 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-969800 cache add                                 | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:07 UTC | 22 Jan 24 11:09 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-969800 cache add                                 | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:09 UTC | 22 Jan 24 11:11 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-969800 cache add                                 | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:11 UTC | 22 Jan 24 11:12 UTC |
	|         | minikube-local-cache-test:functional-969800                 |                   |                   |         |                     |                     |
	| cache   | functional-969800 cache delete                              | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:12 UTC | 22 Jan 24 11:12 UTC |
	|         | minikube-local-cache-test:functional-969800                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:12 UTC | 22 Jan 24 11:12 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:12 UTC | 22 Jan 24 11:12 UTC |
	| ssh     | functional-969800 ssh sudo                                  | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:12 UTC |                     |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-969800                                           | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:12 UTC |                     |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-969800 ssh                                       | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:13 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-969800 cache reload                              | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:13 UTC | 22 Jan 24 11:15 UTC |
	| ssh     | functional-969800 ssh                                       | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:15 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:15 UTC | 22 Jan 24 11:15 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:15 UTC | 22 Jan 24 11:15 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-969800 kubectl --                                | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:18 UTC |                     |
	|         | --context functional-969800                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/22 10:57:40
	Running on machine: minikube7
	Binary: Built with gc go1.21.6 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0122 10:57:40.469886   10176 out.go:296] Setting OutFile to fd 668 ...
	I0122 10:57:40.471315   10176 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0122 10:57:40.471315   10176 out.go:309] Setting ErrFile to fd 532...
	I0122 10:57:40.471403   10176 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0122 10:57:40.492855   10176 out.go:303] Setting JSON to false
	I0122 10:57:40.496696   10176 start.go:128] hostinfo: {"hostname":"minikube7","uptime":40429,"bootTime":1705880631,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3930 Build 19045.3930","kernelVersion":"10.0.19045.3930 Build 19045.3930","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0122 10:57:40.496696   10176 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0122 10:57:40.498142   10176 out.go:177] * [functional-969800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	I0122 10:57:40.498142   10176 notify.go:220] Checking for updates...
	I0122 10:57:40.499579   10176 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 10:57:40.499736   10176 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0122 10:57:40.500343   10176 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0122 10:57:40.501324   10176 out.go:177]   - MINIKUBE_LOCATION=18014
	I0122 10:57:40.501745   10176 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 10:57:40.502974   10176 config.go:182] Loaded profile config "functional-969800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 10:57:40.502974   10176 driver.go:392] Setting default libvirt URI to qemu:///system
	I0122 10:57:45.928080   10176 out.go:177] * Using the hyperv driver based on existing profile
	I0122 10:57:45.928983   10176 start.go:298] selected driver: hyperv
	I0122 10:57:45.928983   10176 start.go:902] validating driver "hyperv" against &{Name:functional-969800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.28.4 ClusterName:functional-969800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:172.23.182.59 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:
/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0122 10:57:45.929543   10176 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0122 10:57:45.979518   10176 cni.go:84] Creating CNI manager for ""
	I0122 10:57:45.980100   10176 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0122 10:57:45.980100   10176 start_flags.go:321] config:
	{Name:functional-969800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-969800 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:172.23.182.59 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26214
4 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0122 10:57:45.980261   10176 iso.go:125] acquiring lock: {Name:mkdec6f26395f187031104148d2e41f41e1ae42b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 10:57:45.981049   10176 out.go:177] * Starting control plane node functional-969800 in cluster functional-969800
	I0122 10:57:45.982349   10176 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0122 10:57:45.982349   10176 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0122 10:57:45.982349   10176 cache.go:56] Caching tarball of preloaded images
	I0122 10:57:45.983003   10176 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0122 10:57:45.983003   10176 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0122 10:57:45.983545   10176 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-969800\config.json ...
	I0122 10:57:45.985896   10176 start.go:365] acquiring machines lock for functional-969800: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0122 10:57:45.985896   10176 start.go:369] acquired machines lock for "functional-969800" in 0s
	I0122 10:57:45.985896   10176 start.go:96] Skipping create...Using existing machine configuration
	I0122 10:57:45.985896   10176 fix.go:54] fixHost starting: 
	I0122 10:57:45.985896   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:57:48.714864   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:57:48.714864   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:57:48.714864   10176 fix.go:102] recreateIfNeeded on functional-969800: state=Running err=<nil>
	W0122 10:57:48.714864   10176 fix.go:128] unexpected machine state, will restart: <nil>
	I0122 10:57:48.716293   10176 out.go:177] * Updating the running hyperv "functional-969800" VM ...
	I0122 10:57:48.717336   10176 machine.go:88] provisioning docker machine ...
	I0122 10:57:48.717336   10176 buildroot.go:166] provisioning hostname "functional-969800"
	I0122 10:57:48.717336   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:57:50.876024   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:57:50.876229   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:57:50.876229   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:57:53.470625   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:57:53.470625   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:57:53.475376   10176 main.go:141] libmachine: Using SSH client type: native
	I0122 10:57:53.476351   10176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 10:57:53.476351   10176 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-969800 && echo "functional-969800" | sudo tee /etc/hostname
	I0122 10:57:53.638784   10176 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-969800
	
	I0122 10:57:53.638784   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:57:55.805671   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:57:55.805815   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:57:55.805815   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:57:58.401038   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:57:58.401038   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:57:58.407147   10176 main.go:141] libmachine: Using SSH client type: native
	I0122 10:57:58.407915   10176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 10:57:58.407915   10176 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-969800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-969800/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-969800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0122 10:57:58.548972   10176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 10:57:58.548972   10176 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0122 10:57:58.549601   10176 buildroot.go:174] setting up certificates
	I0122 10:57:58.549601   10176 provision.go:83] configureAuth start
	I0122 10:57:58.549601   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:00.709217   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:00.709410   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:00.709520   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:03.272329   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:03.272570   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:03.272686   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:05.424282   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:05.424282   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:05.424282   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:07.989209   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:07.989209   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:07.989209   10176 provision.go:138] copyHostCerts
	I0122 10:58:07.989628   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0122 10:58:07.990020   10176 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0122 10:58:07.990122   10176 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0122 10:58:07.990638   10176 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0122 10:58:07.992053   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0122 10:58:07.992258   10176 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0122 10:58:07.992345   10176 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0122 10:58:07.992434   10176 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0122 10:58:07.993591   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0122 10:58:07.993837   10176 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0122 10:58:07.993924   10176 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0122 10:58:07.994335   10176 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0122 10:58:07.995347   10176 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-969800 san=[172.23.182.59 172.23.182.59 localhost 127.0.0.1 minikube functional-969800]
	I0122 10:58:08.282789   10176 provision.go:172] copyRemoteCerts
	I0122 10:58:08.298910   10176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0122 10:58:08.298910   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:10.409009   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:10.409144   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:10.409236   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:12.976882   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:12.976956   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:12.977542   10176 sshutil.go:53] new ssh client: &{IP:172.23.182.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-969800\id_rsa Username:docker}
	I0122 10:58:13.085280   10176 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7863488s)
	I0122 10:58:13.085358   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0122 10:58:13.085992   10176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0122 10:58:13.126129   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0122 10:58:13.126556   10176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0122 10:58:13.168097   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0122 10:58:13.168507   10176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0122 10:58:13.217793   10176 provision.go:86] duration metric: configureAuth took 14.6681291s
	I0122 10:58:13.217918   10176 buildroot.go:189] setting minikube options for container-runtime
	I0122 10:58:13.218335   10176 config.go:182] Loaded profile config "functional-969800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 10:58:13.218335   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:15.351479   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:15.351479   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:15.351579   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:17.930279   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:17.930493   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:17.935586   10176 main.go:141] libmachine: Using SSH client type: native
	I0122 10:58:17.936273   10176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 10:58:17.936273   10176 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0122 10:58:18.079216   10176 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0122 10:58:18.079216   10176 buildroot.go:70] root file system type: tmpfs
	I0122 10:58:18.079787   10176 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0122 10:58:18.079878   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:20.240707   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:20.240707   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:20.240707   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:22.913834   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:22.913834   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:22.919127   10176 main.go:141] libmachine: Using SSH client type: native
	I0122 10:58:22.919857   10176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 10:58:22.919857   10176 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0122 10:58:23.085993   10176 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0122 10:58:23.085993   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:25.254494   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:25.254494   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:25.254617   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:27.803581   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:27.803653   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:27.809703   10176 main.go:141] libmachine: Using SSH client type: native
	I0122 10:58:27.810422   10176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 10:58:27.810422   10176 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0122 10:58:27.960981   10176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 10:58:27.960981   10176 machine.go:91] provisioned docker machine in 39.2434773s
	I0122 10:58:27.960981   10176 start.go:300] post-start starting for "functional-969800" (driver="hyperv")
	I0122 10:58:27.960981   10176 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0122 10:58:27.976090   10176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0122 10:58:27.976090   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:30.146064   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:30.146317   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:30.146317   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:32.720717   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:32.720900   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:32.720966   10176 sshutil.go:53] new ssh client: &{IP:172.23.182.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-969800\id_rsa Username:docker}
	I0122 10:58:32.831028   10176 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8548008s)
	I0122 10:58:32.844266   10176 ssh_runner.go:195] Run: cat /etc/os-release
	I0122 10:58:32.851138   10176 command_runner.go:130] > NAME=Buildroot
	I0122 10:58:32.851278   10176 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0122 10:58:32.851278   10176 command_runner.go:130] > ID=buildroot
	I0122 10:58:32.851490   10176 command_runner.go:130] > VERSION_ID=2021.02.12
	I0122 10:58:32.851490   10176 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0122 10:58:32.851544   10176 info.go:137] Remote host: Buildroot 2021.02.12
	I0122 10:58:32.851585   10176 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0122 10:58:32.852067   10176 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0122 10:58:32.853202   10176 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> 63962.pem in /etc/ssl/certs
	I0122 10:58:32.853202   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> /etc/ssl/certs/63962.pem
	I0122 10:58:32.854596   10176 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\6396\hosts -> hosts in /etc/test/nested/copy/6396
	I0122 10:58:32.854596   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\6396\hosts -> /etc/test/nested/copy/6396/hosts
	I0122 10:58:32.868168   10176 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/6396
	I0122 10:58:32.884833   10176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem --> /etc/ssl/certs/63962.pem (1708 bytes)
	I0122 10:58:32.932481   10176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\6396\hosts --> /etc/test/nested/copy/6396/hosts (40 bytes)
	I0122 10:58:32.976776   10176 start.go:303] post-start completed in 5.0157728s
	I0122 10:58:32.976776   10176 fix.go:56] fixHost completed within 46.9906788s
	I0122 10:58:32.976776   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:35.143871   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:35.143871   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:35.144017   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:37.703612   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:37.703920   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:37.710288   10176 main.go:141] libmachine: Using SSH client type: native
	I0122 10:58:37.710854   10176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 10:58:37.710854   10176 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0122 10:58:37.850913   10176 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705921117.851868669
	
	I0122 10:58:37.851011   10176 fix.go:206] guest clock: 1705921117.851868669
	I0122 10:58:37.851011   10176 fix.go:219] Guest: 2024-01-22 10:58:37.851868669 +0000 UTC Remote: 2024-01-22 10:58:32.9767767 +0000 UTC m=+52.675762201 (delta=4.875091969s)
	I0122 10:58:37.851175   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:40.010410   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:40.010601   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:40.010601   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:42.618291   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:42.618378   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:42.625417   10176 main.go:141] libmachine: Using SSH client type: native
	I0122 10:58:42.626844   10176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 10:58:42.626844   10176 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1705921117
	I0122 10:58:42.774588   10176 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan 22 10:58:37 UTC 2024
	
	I0122 10:58:42.774675   10176 fix.go:226] clock set: Mon Jan 22 10:58:37 UTC 2024
	 (err=<nil>)
	I0122 10:58:42.774675   10176 start.go:83] releasing machines lock for "functional-969800", held for 56.7885347s
	I0122 10:58:42.774814   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:44.916786   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:44.916786   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:44.916786   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:47.456679   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:47.456679   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:47.461274   10176 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0122 10:58:47.461385   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:47.472541   10176 ssh_runner.go:195] Run: cat /version.json
	I0122 10:58:47.472541   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:49.704663   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:49.704663   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:49.704663   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:49.704663   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:49.704663   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:49.705114   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:52.392500   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:52.392560   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:52.392560   10176 sshutil.go:53] new ssh client: &{IP:172.23.182.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-969800\id_rsa Username:docker}
	I0122 10:58:52.411796   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:52.411796   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:52.412754   10176 sshutil.go:53] new ssh client: &{IP:172.23.182.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-969800\id_rsa Username:docker}
	I0122 10:58:52.487973   10176 command_runner.go:130] > {"iso_version": "v1.32.1-1703784139-17866", "kicbase_version": "v0.0.42-1703723663-17866", "minikube_version": "v1.32.0", "commit": "eb69424d8f623d7cabea57d4395ce87adf1b5fc3"}
	I0122 10:58:52.488809   10176 ssh_runner.go:235] Completed: cat /version.json: (5.0162464s)
	I0122 10:58:52.502299   10176 ssh_runner.go:195] Run: systemctl --version
	I0122 10:58:52.579182   10176 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0122 10:58:52.579508   10176 command_runner.go:130] > systemd 247 (247)
	I0122 10:58:52.579508   10176 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0122 10:58:52.579508   10176 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1182111s)
	I0122 10:58:52.592839   10176 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0122 10:58:52.600589   10176 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0122 10:58:52.601287   10176 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0122 10:58:52.614022   10176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0122 10:58:52.629161   10176 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0122 10:58:52.629512   10176 start.go:475] detecting cgroup driver to use...
	I0122 10:58:52.629804   10176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 10:58:52.662423   10176 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0122 10:58:52.677419   10176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0122 10:58:52.706434   10176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0122 10:58:52.723206   10176 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0122 10:58:52.736346   10176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0122 10:58:52.766629   10176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 10:58:52.795500   10176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0122 10:58:52.825136   10176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 10:58:52.853412   10176 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0122 10:58:52.881996   10176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0122 10:58:52.913498   10176 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0122 10:58:52.930750   10176 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0122 10:58:52.944733   10176 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0122 10:58:52.973118   10176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 10:58:53.199921   10176 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0122 10:58:53.229360   10176 start.go:475] detecting cgroup driver to use...
	I0122 10:58:53.244205   10176 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0122 10:58:53.265654   10176 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0122 10:58:53.265856   10176 command_runner.go:130] > [Unit]
	I0122 10:58:53.265856   10176 command_runner.go:130] > Description=Docker Application Container Engine
	I0122 10:58:53.265856   10176 command_runner.go:130] > Documentation=https://docs.docker.com
	I0122 10:58:53.265856   10176 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0122 10:58:53.265856   10176 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0122 10:58:53.265856   10176 command_runner.go:130] > StartLimitBurst=3
	I0122 10:58:53.265856   10176 command_runner.go:130] > StartLimitIntervalSec=60
	I0122 10:58:53.265856   10176 command_runner.go:130] > [Service]
	I0122 10:58:53.265856   10176 command_runner.go:130] > Type=notify
	I0122 10:58:53.265856   10176 command_runner.go:130] > Restart=on-failure
	I0122 10:58:53.265856   10176 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0122 10:58:53.265856   10176 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0122 10:58:53.265856   10176 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0122 10:58:53.265856   10176 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0122 10:58:53.265856   10176 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0122 10:58:53.265856   10176 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0122 10:58:53.265856   10176 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0122 10:58:53.265856   10176 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0122 10:58:53.265856   10176 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0122 10:58:53.265856   10176 command_runner.go:130] > ExecStart=
	I0122 10:58:53.265856   10176 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0122 10:58:53.265856   10176 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0122 10:58:53.265856   10176 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0122 10:58:53.265856   10176 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0122 10:58:53.265856   10176 command_runner.go:130] > LimitNOFILE=infinity
	I0122 10:58:53.265856   10176 command_runner.go:130] > LimitNPROC=infinity
	I0122 10:58:53.265856   10176 command_runner.go:130] > LimitCORE=infinity
	I0122 10:58:53.265856   10176 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0122 10:58:53.265856   10176 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0122 10:58:53.265856   10176 command_runner.go:130] > TasksMax=infinity
	I0122 10:58:53.265856   10176 command_runner.go:130] > TimeoutStartSec=0
	I0122 10:58:53.265856   10176 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0122 10:58:53.265856   10176 command_runner.go:130] > Delegate=yes
	I0122 10:58:53.265856   10176 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0122 10:58:53.265856   10176 command_runner.go:130] > KillMode=process
	I0122 10:58:53.266496   10176 command_runner.go:130] > [Install]
	I0122 10:58:53.266496   10176 command_runner.go:130] > WantedBy=multi-user.target
	I0122 10:58:53.282083   10176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 10:58:53.311447   10176 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0122 10:58:53.359743   10176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 10:58:53.396608   10176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0122 10:58:53.415523   10176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 10:58:53.444264   10176 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0122 10:58:53.456104   10176 ssh_runner.go:195] Run: which cri-dockerd
	I0122 10:58:53.461931   10176 command_runner.go:130] > /usr/bin/cri-dockerd
	I0122 10:58:53.476026   10176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0122 10:58:53.490730   10176 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0122 10:58:53.529862   10176 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0122 10:58:53.744013   10176 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0122 10:58:53.975729   10176 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0122 10:58:53.975983   10176 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0122 10:58:54.028787   10176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 10:58:54.249447   10176 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0122 11:00:05.628463   10176 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0122 11:00:05.628515   10176 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xe" for details.
	I0122 11:00:05.630958   10176 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.3811974s)
	I0122 11:00:05.644549   10176 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0122 11:00:05.669673   10176 command_runner.go:130] > -- Journal begins at Mon 2024-01-22 10:54:50 UTC, ends at Mon 2024-01-22 11:00:05 UTC. --
	I0122 11:00:05.670340   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	I0122 11:00:05.670340   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.640172956Z" level=info msg="Starting up"
	I0122 11:00:05.670340   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.641196772Z" level=info msg="containerd not running, starting managed containerd"
	I0122 11:00:05.670340   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.643170703Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=696
	I0122 11:00:05.670340   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.681307999Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	I0122 11:00:05.670433   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.711852976Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0122 11:00:05.670433   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.711950178Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.670523   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.714822923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.670535   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.714949725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.670535   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715319930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.670609   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715475833Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0122 11:00:05.670609   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715577834Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.670609   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715725437Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.670681   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715815938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.670681   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715939840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.670681   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716415947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.670751   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716515049Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0122 11:00:05.670751   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716533749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.670820   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716692852Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.670889   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716783253Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0122 11:00:05.670889   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716856454Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0122 11:00:05.670889   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716943756Z" level=info msg="metadata content store policy set" policy=shared
	I0122 11:00:05.670889   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731298280Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0122 11:00:05.670974   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731388081Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0122 11:00:05.670974   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731408582Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0122 11:00:05.670974   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731467083Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0122 11:00:05.671041   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731484883Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0122 11:00:05.671041   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731498383Z" level=info msg="NRI interface is disabled by configuration."
	I0122 11:00:05.671109   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731528784Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0122 11:00:05.671109   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731677686Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0122 11:00:05.671109   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731783888Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0122 11:00:05.671109   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731806788Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0122 11:00:05.671176   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731822488Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0122 11:00:05.671176   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731838588Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671176   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731859089Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671253   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731890989Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671301   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731924790Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671301   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731943290Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671301   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731959290Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671301   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731973491Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671392   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731992491Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0122 11:00:05.671392   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732288795Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0122 11:00:05.671392   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732814904Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671392   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732879305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671483   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732900805Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0122 11:00:05.671483   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732940506Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0122 11:00:05.671483   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733012907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671483   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733046407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671557   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733235810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671557   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733295711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671557   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733311411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671557   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733344312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671627   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733358612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671627   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733373012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671627   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733387713Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0122 11:00:05.671699   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733473314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671699   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733594716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671699   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733615016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671770   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733643517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671770   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733664217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671770   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733681917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671770   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733711618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671842   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733724518Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0122 11:00:05.671842   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733749418Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0122 11:00:05.671918   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733762418Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0122 11:00:05.671918   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733774019Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0122 11:00:05.671918   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734197325Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0122 11:00:05.672016   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734365128Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0122 11:00:05.672016   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734471630Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0122 11:00:05.672016   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734496030Z" level=info msg="containerd successfully booted in 0.054269s"
	I0122 11:00:05.672016   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.768676964Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0122 11:00:05.672016   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.783849401Z" level=info msg="Loading containers: start."
	I0122 11:00:05.672104   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.997712742Z" level=info msg="Loading containers: done."
	I0122 11:00:05.672104   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022214804Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	I0122 11:00:05.672104   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022261705Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	I0122 11:00:05.672104   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022274505Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	I0122 11:00:05.672104   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022281005Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	I0122 11:00:05.672104   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022301806Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	I0122 11:00:05.672183   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022416907Z" level=info msg="Daemon has completed initialization"
	I0122 11:00:05.672183   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.083652704Z" level=info msg="API listen on /var/run/docker.sock"
	I0122 11:00:05.672183   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.083926108Z" level=info msg="API listen on [::]:2376"
	I0122 11:00:05.672183   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 systemd[1]: Started Docker Application Container Engine.
	I0122 11:00:05.672248   10176 command_runner.go:130] > Jan 22 10:56:12 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	I0122 11:00:05.672248   10176 command_runner.go:130] > Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.621957907Z" level=info msg="Processing signal 'terminated'"
	I0122 11:00:05.672248   10176 command_runner.go:130] > Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623204307Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0122 11:00:05.672329   10176 command_runner.go:130] > Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623373807Z" level=info msg="Daemon shutdown complete"
	I0122 11:00:05.672329   10176 command_runner.go:130] > Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623417807Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0122 11:00:05.672385   10176 command_runner.go:130] > Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623429307Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0122 11:00:05.672460   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 systemd[1]: docker.service: Succeeded.
	I0122 11:00:05.672518   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	I0122 11:00:05.672518   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	I0122 11:00:05.672518   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.697995107Z" level=info msg="Starting up"
	I0122 11:00:05.672518   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.699240007Z" level=info msg="containerd not running, starting managed containerd"
	I0122 11:00:05.672574   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.700630907Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1031
	I0122 11:00:05.672596   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.732892407Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	I0122 11:00:05.672596   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.759794007Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0122 11:00:05.672649   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.759947407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.672649   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.762945807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.672725   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763156107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.672833   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763474707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.672888   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763800907Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0122 11:00:05.672929   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764170707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764208607Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764225307Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764251107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764439107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764536207Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764552707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764742807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764836407Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764867007Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764897007Z" level=info msg="metadata content store policy set" policy=shared
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765451307Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765545607Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765570407Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765599407Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765631407Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765670507Z" level=info msg="NRI interface is disabled by configuration."
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765715707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765770507Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765918107Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765954407Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765992607Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766011607Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766135807Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766161807Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766178607Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766211907Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766239507Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766258407Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766273107Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766317107Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767449407Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767559907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767585207Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767614707Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767679407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767698307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767713707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767728107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.674886   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767743307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.674886   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767759107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.674886   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767773607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.674886   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767790107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.674886   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767807907Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0122 11:00:05.675068   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767846207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.675068   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767884107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.675068   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767901707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.675068   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767917307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.675164   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767934707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.675164   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767953007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.675164   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767985707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.675164   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768000707Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0122 11:00:05.675164   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768018307Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0122 11:00:05.675282   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768110907Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0122 11:00:05.675282   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768150607Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0122 11:00:05.675282   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770293407Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0122 11:00:05.675396   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770589207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0122 11:00:05.675396   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770747207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0122 11:00:05.675396   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770779907Z" level=info msg="containerd successfully booted in 0.039287s"
	I0122 11:00:05.675396   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.796437607Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0122 11:00:05.675484   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.812301607Z" level=info msg="Loading containers: start."
	I0122 11:00:05.675484   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.959771407Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0122 11:00:05.675484   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.025356107Z" level=info msg="Loading containers: done."
	I0122 11:00:05.675484   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043938307Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	I0122 11:00:05.675563   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043967507Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	I0122 11:00:05.675563   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043976207Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	I0122 11:00:05.675563   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043982907Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	I0122 11:00:05.675679   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.044002007Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	I0122 11:00:05.675679   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.044081507Z" level=info msg="Daemon has completed initialization"
	I0122 11:00:05.675679   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.084793407Z" level=info msg="API listen on /var/run/docker.sock"
	I0122 11:00:05.675751   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 systemd[1]: Started Docker Application Container Engine.
	I0122 11:00:05.675751   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.087289507Z" level=info msg="API listen on [::]:2376"
	I0122 11:00:05.675751   10176 command_runner.go:130] > Jan 22 10:56:24 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	I0122 11:00:05.675751   10176 command_runner.go:130] > Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.888530207Z" level=info msg="Processing signal 'terminated'"
	I0122 11:00:05.675824   10176 command_runner.go:130] > Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890215407Z" level=info msg="Daemon shutdown complete"
	I0122 11:00:05.675824   10176 command_runner.go:130] > Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890260307Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0122 11:00:05.675992   10176 command_runner.go:130] > Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890587307Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	I0122 11:00:05.676061   10176 command_runner.go:130] > Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890740407Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0122 11:00:05.676061   10176 command_runner.go:130] > Jan 22 10:56:25 functional-969800 systemd[1]: docker.service: Succeeded.
	I0122 11:00:05.676135   10176 command_runner.go:130] > Jan 22 10:56:25 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	I0122 11:00:05.676135   10176 command_runner.go:130] > Jan 22 10:56:25 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	I0122 11:00:05.676135   10176 command_runner.go:130] > Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.965408907Z" level=info msg="Starting up"
	I0122 11:00:05.676135   10176 command_runner.go:130] > Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.966518207Z" level=info msg="containerd not running, starting managed containerd"
	I0122 11:00:05.676135   10176 command_runner.go:130] > Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.967588307Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1337
	I0122 11:00:05.676135   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.001795407Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	I0122 11:00:05.676234   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.035509507Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0122 11:00:05.676234   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.035631707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.676308   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038165807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.676308   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038334707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.676345   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038628507Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.676345   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038744407Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0122 11:00:05.676418   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038816807Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.676418   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038858807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.676418   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038871007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.676492   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038894707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.676492   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039248607Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.676492   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039359407Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0122 11:00:05.676564   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039378107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.676594   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039547707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.676653   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039641707Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039668207Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039685807Z" level=info msg="metadata content store policy set" policy=shared
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039841507Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039933007Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040079807Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040230207Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040269807Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040281707Z" level=info msg="NRI interface is disabled by configuration."
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040294707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040361507Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040377307Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040391107Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040409607Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040429307Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040449707Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040463607Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040482307Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040514007Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040527507Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040540307Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040551607Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040608107Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042558207Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042674107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677265   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042695907Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0122 11:00:05.677265   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042734707Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0122 11:00:05.677265   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042787207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677265   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042916307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677265   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042937207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677265   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042958307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677265   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042974407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677410   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042993607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043009507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043463207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043558107Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043662307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043706107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043727707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043761007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043778507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043801907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043893407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043920207Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043942907Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043958907Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043974107Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044581807Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044705407Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044803207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044899507Z" level=info msg="containerd successfully booted in 0.045219s"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1331]: time="2024-01-22T10:56:26.371254407Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.294692607Z" level=info msg="Loading containers: start."
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.459462607Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.525291207Z" level=info msg="Loading containers: done."
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544516307Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544552907Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544562107Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544572107Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544597307Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544708807Z" level=info msg="Daemon has completed initialization"
	I0122 11:00:05.678028   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.576971407Z" level=info msg="API listen on [::]:2376"
	I0122 11:00:05.678028   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.577092907Z" level=info msg="API listen on /var/run/docker.sock"
	I0122 11:00:05.678028   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 systemd[1]: Started Docker Application Container Engine.
	I0122 11:00:05.678028   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643012417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678028   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643161510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643191508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643608487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648436541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648545436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648613232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648628031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650622030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650800621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650820520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650831019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.675561460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.675860444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.676066234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.676182428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627214283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627479370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627708859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627925149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.661590442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.665215768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.665413259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.667762047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.837739331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838092414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678708   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838119012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678708   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838132512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678708   10176 command_runner.go:130] > Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029726252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678708   10176 command_runner.go:130] > Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029885345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678708   10176 command_runner.go:130] > Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029907544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678852   10176 command_runner.go:130] > Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029921943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678884   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442471250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678884   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442551949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678933   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442585148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442603448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.636810797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637124693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637309591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637408590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.128989239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129139537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129211936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129226336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.005359252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.018527500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.020526276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.020768074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531424444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531481843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531496243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531507943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.163527770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.164982859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.165023859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.165340756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.271367222Z" level=info msg="Processing signal 'terminated'"
	I0122 11:00:05.679489   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.408491526Z" level=info msg="ignoring event" container=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.679489   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.408935427Z" level=info msg="shim disconnected" id=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c namespace=moby
	I0122 11:00:05.679489   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.409367629Z" level=warning msg="cleaning up after shim disconnected" id=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c namespace=moby
	I0122 11:00:05.679489   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.409386829Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482131396Z" level=info msg="shim disconnected" id=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482242197Z" level=warning msg="cleaning up after shim disconnected" id=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482643498Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.485265908Z" level=info msg="ignoring event" container=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.514982317Z" level=info msg="ignoring event" container=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.517274025Z" level=info msg="shim disconnected" id=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.518024528Z" level=warning msg="cleaning up after shim disconnected" id=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.518088128Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.533465185Z" level=info msg="ignoring event" container=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533775786Z" level=info msg="shim disconnected" id=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533878986Z" level=warning msg="cleaning up after shim disconnected" id=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533896387Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.541816616Z" level=info msg="ignoring event" container=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.544778327Z" level=info msg="shim disconnected" id=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.545052128Z" level=warning msg="cleaning up after shim disconnected" id=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.545071928Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.564756500Z" level=info msg="ignoring event" container=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.566095405Z" level=info msg="shim disconnected" id=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.580413557Z" level=warning msg="cleaning up after shim disconnected" id=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.580441258Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680179   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590163393Z" level=info msg="shim disconnected" id=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f namespace=moby
	I0122 11:00:05.680179   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590197393Z" level=warning msg="cleaning up after shim disconnected" id=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f namespace=moby
	I0122 11:00:05.680179   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590265894Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.600700032Z" level=info msg="shim disconnected" id=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.600935033Z" level=info msg="ignoring event" container=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.600978833Z" level=info msg="ignoring event" container=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.611137470Z" level=info msg="ignoring event" container=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611370671Z" level=warning msg="cleaning up after shim disconnected" id=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611429671Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611635772Z" level=info msg="shim disconnected" id=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.612703976Z" level=warning msg="cleaning up after shim disconnected" id=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.613303678Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.613702780Z" level=info msg="ignoring event" container=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.616524590Z" level=info msg="ignoring event" container=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.618830499Z" level=info msg="shim disconnected" id=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.618976199Z" level=warning msg="cleaning up after shim disconnected" id=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.627451230Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.628495534Z" level=info msg="shim disconnected" id=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.630007140Z" level=warning msg="cleaning up after shim disconnected" id=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.630196840Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.648394307Z" level=info msg="shim disconnected" id=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.650553715Z" level=info msg="ignoring event" container=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.658301844Z" level=warning msg="cleaning up after shim disconnected" id=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.658345944Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:59 functional-969800 dockerd[1331]: time="2024-01-22T10:58:59.444520533Z" level=info msg="ignoring event" container=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.447631244Z" level=info msg="shim disconnected" id=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.448443247Z" level=warning msg="cleaning up after shim disconnected" id=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.448523248Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.436253177Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.489047672Z" level=info msg="ignoring event" container=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.490548477Z" level=info msg="shim disconnected" id=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.490993079Z" level=warning msg="cleaning up after shim disconnected" id=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.491010379Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.536562746Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.536997048Z" level=info msg="Daemon shutdown complete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.537118848Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.537155648Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:05 functional-969800 systemd[1]: docker.service: Succeeded.
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:05 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:05 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:05 functional-969800 dockerd[5681]: time="2024-01-22T10:59:05.618184021Z" level=info msg="Starting up"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 11:00:05 functional-969800 dockerd[5681]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 11:00:05 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 11:00:05 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 11:00:05 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	I0122 11:00:05.703571   10176 out.go:177] 
	W0122 11:00:05.704224   10176 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Mon 2024-01-22 10:54:50 UTC, ends at Mon 2024-01-22 11:00:05 UTC. --
	Jan 22 10:55:41 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.640172956Z" level=info msg="Starting up"
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.641196772Z" level=info msg="containerd not running, starting managed containerd"
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.643170703Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=696
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.681307999Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.711852976Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.711950178Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.714822923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.714949725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715319930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715475833Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715577834Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715725437Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715815938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715939840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716415947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716515049Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716533749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716692852Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716783253Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716856454Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716943756Z" level=info msg="metadata content store policy set" policy=shared
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731298280Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731388081Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731408582Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731467083Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731484883Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731498383Z" level=info msg="NRI interface is disabled by configuration."
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731528784Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731677686Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731783888Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731806788Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731822488Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731838588Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731859089Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731890989Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731924790Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731943290Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731959290Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731973491Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731992491Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732288795Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732814904Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732879305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732900805Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732940506Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733012907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733046407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733235810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733295711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733311411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733344312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733358612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733373012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733387713Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733473314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733594716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733615016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733643517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733664217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733681917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733711618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733724518Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733749418Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733762418Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733774019Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734197325Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734365128Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734471630Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734496030Z" level=info msg="containerd successfully booted in 0.054269s"
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.768676964Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.783849401Z" level=info msg="Loading containers: start."
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.997712742Z" level=info msg="Loading containers: done."
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022214804Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022261705Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022274505Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022281005Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022301806Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022416907Z" level=info msg="Daemon has completed initialization"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.083652704Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.083926108Z" level=info msg="API listen on [::]:2376"
	Jan 22 10:55:42 functional-969800 systemd[1]: Started Docker Application Container Engine.
	Jan 22 10:56:12 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.621957907Z" level=info msg="Processing signal 'terminated'"
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623204307Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623373807Z" level=info msg="Daemon shutdown complete"
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623417807Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623429307Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 22 10:56:13 functional-969800 systemd[1]: docker.service: Succeeded.
	Jan 22 10:56:13 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 10:56:13 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.697995107Z" level=info msg="Starting up"
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.699240007Z" level=info msg="containerd not running, starting managed containerd"
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.700630907Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1031
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.732892407Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.759794007Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.759947407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.762945807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763156107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763474707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763800907Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764170707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764208607Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764225307Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764251107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764439107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764536207Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764552707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764742807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764836407Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764867007Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764897007Z" level=info msg="metadata content store policy set" policy=shared
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765451307Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765545607Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765570407Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765599407Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765631407Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765670507Z" level=info msg="NRI interface is disabled by configuration."
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765715707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765770507Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765918107Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765954407Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765992607Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766011607Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766135807Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766161807Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766178607Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766211907Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766239507Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766258407Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766273107Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766317107Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767449407Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767559907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767585207Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767614707Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767679407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767698307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767713707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767728107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767743307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767759107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767773607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767790107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767807907Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767846207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767884107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767901707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767917307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767934707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767953007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767985707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768000707Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768018307Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768110907Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768150607Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770293407Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770589207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770747207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770779907Z" level=info msg="containerd successfully booted in 0.039287s"
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.796437607Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.812301607Z" level=info msg="Loading containers: start."
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.959771407Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.025356107Z" level=info msg="Loading containers: done."
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043938307Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043967507Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043976207Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043982907Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.044002007Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.044081507Z" level=info msg="Daemon has completed initialization"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.084793407Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 22 10:56:14 functional-969800 systemd[1]: Started Docker Application Container Engine.
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.087289507Z" level=info msg="API listen on [::]:2376"
	Jan 22 10:56:24 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.888530207Z" level=info msg="Processing signal 'terminated'"
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890215407Z" level=info msg="Daemon shutdown complete"
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890260307Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890587307Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890740407Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 22 10:56:25 functional-969800 systemd[1]: docker.service: Succeeded.
	Jan 22 10:56:25 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 10:56:25 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.965408907Z" level=info msg="Starting up"
	Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.966518207Z" level=info msg="containerd not running, starting managed containerd"
	Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.967588307Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1337
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.001795407Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.035509507Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.035631707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038165807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038334707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038628507Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038744407Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038816807Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038858807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038871007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038894707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039248607Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039359407Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039378107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039547707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039641707Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039668207Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039685807Z" level=info msg="metadata content store policy set" policy=shared
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039841507Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039933007Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040079807Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040230207Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040269807Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040281707Z" level=info msg="NRI interface is disabled by configuration."
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040294707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040361507Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040377307Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040391107Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040409607Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040429307Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040449707Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040463607Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040482307Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040514007Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040527507Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040540307Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040551607Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040608107Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042558207Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042674107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042695907Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042734707Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042787207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042916307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042937207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042958307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042974407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042993607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043009507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043463207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043558107Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043662307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043706107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043727707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043761007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043778507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043801907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043893407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043920207Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043942907Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043958907Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043974107Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044581807Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044705407Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044803207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044899507Z" level=info msg="containerd successfully booted in 0.045219s"
	Jan 22 10:56:26 functional-969800 dockerd[1331]: time="2024-01-22T10:56:26.371254407Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.294692607Z" level=info msg="Loading containers: start."
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.459462607Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.525291207Z" level=info msg="Loading containers: done."
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544516307Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544552907Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544562107Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544572107Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544597307Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544708807Z" level=info msg="Daemon has completed initialization"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.576971407Z" level=info msg="API listen on [::]:2376"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.577092907Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 22 10:56:30 functional-969800 systemd[1]: Started Docker Application Container Engine.
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643012417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643161510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643191508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643608487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648436541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648545436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648613232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648628031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650622030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650800621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650820520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650831019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.675561460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.675860444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.676066234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.676182428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627214283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627479370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627708859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627925149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.661590442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.665215768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.665413259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.667762047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.837739331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838092414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838119012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838132512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029726252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029885345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029907544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029921943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442471250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442551949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442585148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442603448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.636810797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637124693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637309591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637408590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.128989239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129139537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129211936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129226336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.005359252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.018527500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.020526276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.020768074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531424444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531481843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531496243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531507943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.163527770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.164982859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.165023859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.165340756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:58:54 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.271367222Z" level=info msg="Processing signal 'terminated'"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.408491526Z" level=info msg="ignoring event" container=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.408935427Z" level=info msg="shim disconnected" id=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.409367629Z" level=warning msg="cleaning up after shim disconnected" id=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.409386829Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482131396Z" level=info msg="shim disconnected" id=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482242197Z" level=warning msg="cleaning up after shim disconnected" id=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482643498Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.485265908Z" level=info msg="ignoring event" container=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.514982317Z" level=info msg="ignoring event" container=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.517274025Z" level=info msg="shim disconnected" id=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.518024528Z" level=warning msg="cleaning up after shim disconnected" id=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.518088128Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.533465185Z" level=info msg="ignoring event" container=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533775786Z" level=info msg="shim disconnected" id=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533878986Z" level=warning msg="cleaning up after shim disconnected" id=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533896387Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.541816616Z" level=info msg="ignoring event" container=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.544778327Z" level=info msg="shim disconnected" id=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.545052128Z" level=warning msg="cleaning up after shim disconnected" id=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.545071928Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.564756500Z" level=info msg="ignoring event" container=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.566095405Z" level=info msg="shim disconnected" id=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.580413557Z" level=warning msg="cleaning up after shim disconnected" id=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.580441258Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590163393Z" level=info msg="shim disconnected" id=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590197393Z" level=warning msg="cleaning up after shim disconnected" id=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590265894Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.600700032Z" level=info msg="shim disconnected" id=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.600935033Z" level=info msg="ignoring event" container=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.600978833Z" level=info msg="ignoring event" container=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.611137470Z" level=info msg="ignoring event" container=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611370671Z" level=warning msg="cleaning up after shim disconnected" id=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611429671Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611635772Z" level=info msg="shim disconnected" id=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.612703976Z" level=warning msg="cleaning up after shim disconnected" id=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.613303678Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.613702780Z" level=info msg="ignoring event" container=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.616524590Z" level=info msg="ignoring event" container=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.618830499Z" level=info msg="shim disconnected" id=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.618976199Z" level=warning msg="cleaning up after shim disconnected" id=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.627451230Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.628495534Z" level=info msg="shim disconnected" id=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.630007140Z" level=warning msg="cleaning up after shim disconnected" id=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.630196840Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.648394307Z" level=info msg="shim disconnected" id=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.650553715Z" level=info msg="ignoring event" container=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.658301844Z" level=warning msg="cleaning up after shim disconnected" id=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.658345944Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:59 functional-969800 dockerd[1331]: time="2024-01-22T10:58:59.444520533Z" level=info msg="ignoring event" container=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.447631244Z" level=info msg="shim disconnected" id=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 namespace=moby
	Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.448443247Z" level=warning msg="cleaning up after shim disconnected" id=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 namespace=moby
	Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.448523248Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.436253177Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.489047672Z" level=info msg="ignoring event" container=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.490548477Z" level=info msg="shim disconnected" id=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.490993079Z" level=warning msg="cleaning up after shim disconnected" id=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.491010379Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.536562746Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.536997048Z" level=info msg="Daemon shutdown complete"
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.537118848Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.537155648Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 22 10:59:05 functional-969800 systemd[1]: docker.service: Succeeded.
	Jan 22 10:59:05 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 10:59:05 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 10:59:05 functional-969800 dockerd[5681]: time="2024-01-22T10:59:05.618184021Z" level=info msg="Starting up"
	Jan 22 11:00:05 functional-969800 dockerd[5681]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:00:05 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:00:05 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:00:05 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0122 11:00:05.705403   10176 out.go:239] * 
	W0122 11:00:05.706768   10176 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0122 11:00:05.707971   10176 out.go:177] 
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-01-22 10:54:50 UTC, ends at Mon 2024-01-22 11:19:10 UTC. --
	Jan 22 11:17:09 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:17:09 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:17:09 functional-969800 cri-dockerd[1222]: time="2024-01-22T11:17:09Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Jan 22 11:17:09 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:17:09 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 18.
	Jan 22 11:17:09 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:17:09 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:17:09 functional-969800 dockerd[8816]: time="2024-01-22T11:17:09.993681624Z" level=info msg="Starting up"
	Jan 22 11:18:10 functional-969800 dockerd[8816]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:18:10 functional-969800 cri-dockerd[1222]: time="2024-01-22T11:18:10Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Jan 22 11:18:10 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:18:10 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:18:10 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:18:10 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 19.
	Jan 22 11:18:10 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:18:10 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:18:10 functional-969800 dockerd[8970]: time="2024-01-22T11:18:10.219717628Z" level=info msg="Starting up"
	Jan 22 11:19:10 functional-969800 dockerd[8970]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:19:10 functional-969800 cri-dockerd[1222]: time="2024-01-22T11:19:10Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Jan 22 11:19:10 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:19:10 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:19:10 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:19:10 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 20.
	Jan 22 11:19:10 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:19:10 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	
	==> describe nodes <==
	
	==> dmesg <==
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +7.735107] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jan22 10:55] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.139168] systemd-fstab-generator[682]: Ignoring "noauto" for root device
	[Jan22 10:56] systemd-fstab-generator[954]: Ignoring "noauto" for root device
	[  +0.548146] systemd-fstab-generator[992]: Ignoring "noauto" for root device
	[  +0.177021] systemd-fstab-generator[1003]: Ignoring "noauto" for root device
	[  +0.192499] systemd-fstab-generator[1016]: Ignoring "noauto" for root device
	[  +1.326236] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.405236] systemd-fstab-generator[1177]: Ignoring "noauto" for root device
	[  +0.162074] systemd-fstab-generator[1188]: Ignoring "noauto" for root device
	[  +0.187859] systemd-fstab-generator[1199]: Ignoring "noauto" for root device
	[  +0.249478] systemd-fstab-generator[1214]: Ignoring "noauto" for root device
	[  +9.926271] systemd-fstab-generator[1322]: Ignoring "noauto" for root device
	[  +5.555073] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.934437] systemd-fstab-generator[1705]: Ignoring "noauto" for root device
	[  +0.641554] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.684973] systemd-fstab-generator[2663]: Ignoring "noauto" for root device
	[Jan22 10:57] kauditd_printk_skb: 21 callbacks suppressed
	[Jan22 10:58] systemd-fstab-generator[5187]: Ignoring "noauto" for root device
	[  +0.564007] systemd-fstab-generator[5225]: Ignoring "noauto" for root device
	[  +0.215913] systemd-fstab-generator[5236]: Ignoring "noauto" for root device
	[  +0.280562] systemd-fstab-generator[5249]: Ignoring "noauto" for root device
	
	
	==> kernel <==
	 11:20:10 up 25 min,  0 users,  load average: 0.02, 0.03, 0.06
	Linux functional-969800 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-22 10:54:50 UTC, ends at Mon 2024-01-22 11:20:10 UTC. --
	Jan 22 11:20:08 functional-969800 kubelet[2697]: E0122 11:20:08.403129    2697 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-functional-969800.17aca63e16ffbc74", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-functional-969800", UID:"eec8edf9285f620f8bcb544dccfdc7a0", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: Get \"https://172.23.182.59:8441/readyz\": dial tcp 172.23.182.59:8441: connect: connection refused", Source:v1.EventSource{Component:"kubelet", Host:"functional-969800"}, FirstTimestamp:time.Date(2024, time.January, 22, 10, 59, 4, 470617204, time.Local), LastTimestamp:time.Date(2024, time.January, 22, 10, 59, 6, 525768456, time.Local), Count:4, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"functional-969800"}': 'Patch "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/kube-apiserver-functional-969800.17aca63e16ffbc74": dial tcp 172.23.182.59:8441: connect: connection refused'(may retry after sleeping)
	Jan 22 11:20:08 functional-969800 kubelet[2697]: E0122 11:20:08.770403    2697 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 21m14.853621441s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Jan 22 11:20:09 functional-969800 kubelet[2697]: E0122 11:20:09.612009    2697 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-969800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-969800?resourceVersion=0&timeout=10s\": dial tcp 172.23.182.59:8441: connect: connection refused"
	Jan 22 11:20:09 functional-969800 kubelet[2697]: E0122 11:20:09.612488    2697 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-969800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-969800?timeout=10s\": dial tcp 172.23.182.59:8441: connect: connection refused"
	Jan 22 11:20:09 functional-969800 kubelet[2697]: E0122 11:20:09.612679    2697 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-969800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-969800?timeout=10s\": dial tcp 172.23.182.59:8441: connect: connection refused"
	Jan 22 11:20:09 functional-969800 kubelet[2697]: E0122 11:20:09.612974    2697 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-969800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-969800?timeout=10s\": dial tcp 172.23.182.59:8441: connect: connection refused"
	Jan 22 11:20:09 functional-969800 kubelet[2697]: E0122 11:20:09.613676    2697 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-969800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-969800?timeout=10s\": dial tcp 172.23.182.59:8441: connect: connection refused"
	Jan 22 11:20:09 functional-969800 kubelet[2697]: E0122 11:20:09.613713    2697 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
	Jan 22 11:20:10 functional-969800 kubelet[2697]: E0122 11:20:10.411186    2697 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-969800?timeout=10s\": dial tcp 172.23.182.59:8441: connect: connection refused" interval="7s"
	Jan 22 11:20:10 functional-969800 kubelet[2697]: E0122 11:20:10.582587    2697 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jan 22 11:20:10 functional-969800 kubelet[2697]: E0122 11:20:10.583827    2697 kuberuntime_container.go:477] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:20:10 functional-969800 kubelet[2697]: E0122 11:20:10.584361    2697 kubelet.go:2865] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:20:10 functional-969800 kubelet[2697]: E0122 11:20:10.584407    2697 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jan 22 11:20:10 functional-969800 kubelet[2697]: E0122 11:20:10.589253    2697 container_log_manager.go:185] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:20:10 functional-969800 kubelet[2697]: E0122 11:20:10.582193    2697 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:20:10 functional-969800 kubelet[2697]: E0122 11:20:10.589282    2697 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:20:10 functional-969800 kubelet[2697]: E0122 11:20:10.588981    2697 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jan 22 11:20:10 functional-969800 kubelet[2697]: E0122 11:20:10.589305    2697 kuberuntime_image.go:103] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:20:10 functional-969800 kubelet[2697]: I0122 11:20:10.589318    2697 image_gc_manager.go:210] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:20:10 functional-969800 kubelet[2697]: E0122 11:20:10.589675    2697 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jan 22 11:20:10 functional-969800 kubelet[2697]: E0122 11:20:10.589801    2697 kuberuntime_container.go:477] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jan 22 11:20:10 functional-969800 kubelet[2697]: E0122 11:20:10.589992    2697 kubelet.go:1402] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Jan 22 11:20:10 functional-969800 kubelet[2697]: E0122 11:20:10.590151    2697 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jan 22 11:20:10 functional-969800 kubelet[2697]: E0122 11:20:10.590269    2697 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:20:10 functional-969800 kubelet[2697]: E0122 11:20:10.590288    2697 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0122 11:18:45.622691    6420 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0122 11:19:10.231729    6420 logs.go:281] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:19:10.265727    6420 logs.go:281] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:19:10.301232    6420 logs.go:281] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:19:10.330652    6420 logs.go:281] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:19:10.362254    6420 logs.go:281] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:19:10.394347    6420 logs.go:281] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:19:10.429343    6420 logs.go:281] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:19:10.459804    6420 logs.go:281] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:20:10.579273    6420 logs.go:195] command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-01-22T11:19:12Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	 output: "\n** stderr ** \ntime=\"2024-01-22T11:19:12Z\" level=fatal msg=\"validate service connection: validate CRI v1 runtime API for endpoint \\\"unix:///var/run/cri-dockerd.sock\\\": rpc error: code = DeadlineExceeded desc = context deadline exceeded\"\nCannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?\n\n** /stderr **"
	E0122 11:20:10.674563    6420 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8441 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: container status, describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-969800 -n functional-969800
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-969800 -n functional-969800: exit status 2 (12.1196556s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0122 11:20:11.442118    1792 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-969800" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (120.57s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (180.68s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out\kubectl.exe --context functional-969800 get pods
functional_test.go:737: (dbg) Non-zero exit: out\kubectl.exe --context functional-969800 get pods: exit status 1 (10.6595718s)

                                                
                                                
** stderr ** 
	W0122 11:20:23.570563    8548 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0122 11:20:25.814488    3260 memcache.go:265] couldn't get current server API group list: Get "https://172.23.182.59:8441/api?timeout=32s": dial tcp 172.23.182.59:8441: connectex: No connection could be made because the target machine actively refused it.
	E0122 11:20:27.906204    3260 memcache.go:265] couldn't get current server API group list: Get "https://172.23.182.59:8441/api?timeout=32s": dial tcp 172.23.182.59:8441: connectex: No connection could be made because the target machine actively refused it.
	E0122 11:20:29.950075    3260 memcache.go:265] couldn't get current server API group list: Get "https://172.23.182.59:8441/api?timeout=32s": dial tcp 172.23.182.59:8441: connectex: No connection could be made because the target machine actively refused it.
	E0122 11:20:32.012201    3260 memcache.go:265] couldn't get current server API group list: Get "https://172.23.182.59:8441/api?timeout=32s": dial tcp 172.23.182.59:8441: connectex: No connection could be made because the target machine actively refused it.
	E0122 11:20:34.048838    3260 memcache.go:265] couldn't get current server API group list: Get "https://172.23.182.59:8441/api?timeout=32s": dial tcp 172.23.182.59:8441: connectex: No connection could be made because the target machine actively refused it.
	Unable to connect to the server: dial tcp 172.23.182.59:8441: connectex: No connection could be made because the target machine actively refused it.

                                                
                                                
** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out\\kubectl.exe --context functional-969800 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-969800 -n functional-969800
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-969800 -n functional-969800: exit status 2 (11.969022s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0122 11:20:34.249400    8884 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-969800 logs -n 25
E0122 11:21:27.146798    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-969800 logs -n 25: (2m25.4284063s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-526800 --log_dir                                     | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:52 UTC | 22 Jan 24 10:52 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-526800 --log_dir                                     | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:52 UTC | 22 Jan 24 10:52 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-526800 --log_dir                                     | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:52 UTC | 22 Jan 24 10:52 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-526800 --log_dir                                     | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:52 UTC | 22 Jan 24 10:52 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-526800 --log_dir                                     | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:52 UTC | 22 Jan 24 10:53 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-526800 --log_dir                                     | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:53 UTC | 22 Jan 24 10:53 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-526800 --log_dir                                     | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:53 UTC | 22 Jan 24 10:53 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-526800                                            | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:53 UTC | 22 Jan 24 10:53 UTC |
	| start   | -p functional-969800                                        | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:53 UTC | 22 Jan 24 10:57 UTC |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-969800                                        | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:57 UTC |                     |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-969800 cache add                                 | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:05 UTC | 22 Jan 24 11:07 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-969800 cache add                                 | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:07 UTC | 22 Jan 24 11:09 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-969800 cache add                                 | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:09 UTC | 22 Jan 24 11:11 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-969800 cache add                                 | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:11 UTC | 22 Jan 24 11:12 UTC |
	|         | minikube-local-cache-test:functional-969800                 |                   |                   |         |                     |                     |
	| cache   | functional-969800 cache delete                              | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:12 UTC | 22 Jan 24 11:12 UTC |
	|         | minikube-local-cache-test:functional-969800                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:12 UTC | 22 Jan 24 11:12 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:12 UTC | 22 Jan 24 11:12 UTC |
	| ssh     | functional-969800 ssh sudo                                  | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:12 UTC |                     |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-969800                                           | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:12 UTC |                     |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-969800 ssh                                       | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:13 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-969800 cache reload                              | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:13 UTC | 22 Jan 24 11:15 UTC |
	| ssh     | functional-969800 ssh                                       | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:15 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:15 UTC | 22 Jan 24 11:15 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:15 UTC | 22 Jan 24 11:15 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-969800 kubectl --                                | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:18 UTC |                     |
	|         | --context functional-969800                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/22 10:57:40
	Running on machine: minikube7
	Binary: Built with gc go1.21.6 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0122 10:57:40.469886   10176 out.go:296] Setting OutFile to fd 668 ...
	I0122 10:57:40.471315   10176 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0122 10:57:40.471315   10176 out.go:309] Setting ErrFile to fd 532...
	I0122 10:57:40.471403   10176 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0122 10:57:40.492855   10176 out.go:303] Setting JSON to false
	I0122 10:57:40.496696   10176 start.go:128] hostinfo: {"hostname":"minikube7","uptime":40429,"bootTime":1705880631,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3930 Build 19045.3930","kernelVersion":"10.0.19045.3930 Build 19045.3930","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0122 10:57:40.496696   10176 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0122 10:57:40.498142   10176 out.go:177] * [functional-969800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	I0122 10:57:40.498142   10176 notify.go:220] Checking for updates...
	I0122 10:57:40.499579   10176 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 10:57:40.499736   10176 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0122 10:57:40.500343   10176 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0122 10:57:40.501324   10176 out.go:177]   - MINIKUBE_LOCATION=18014
	I0122 10:57:40.501745   10176 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 10:57:40.502974   10176 config.go:182] Loaded profile config "functional-969800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 10:57:40.502974   10176 driver.go:392] Setting default libvirt URI to qemu:///system
	I0122 10:57:45.928080   10176 out.go:177] * Using the hyperv driver based on existing profile
	I0122 10:57:45.928983   10176 start.go:298] selected driver: hyperv
	I0122 10:57:45.928983   10176 start.go:902] validating driver "hyperv" against &{Name:functional-969800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-969800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:172.23.182.59 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0122 10:57:45.929543   10176 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0122 10:57:45.979518   10176 cni.go:84] Creating CNI manager for ""
	I0122 10:57:45.980100   10176 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0122 10:57:45.980100   10176 start_flags.go:321] config:
	{Name:functional-969800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-969800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:172.23.182.59 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0122 10:57:45.980261   10176 iso.go:125] acquiring lock: {Name:mkdec6f26395f187031104148d2e41f41e1ae42b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 10:57:45.981049   10176 out.go:177] * Starting control plane node functional-969800 in cluster functional-969800
	I0122 10:57:45.982349   10176 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0122 10:57:45.982349   10176 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0122 10:57:45.982349   10176 cache.go:56] Caching tarball of preloaded images
	I0122 10:57:45.983003   10176 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0122 10:57:45.983003   10176 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0122 10:57:45.983545   10176 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-969800\config.json ...
	I0122 10:57:45.985896   10176 start.go:365] acquiring machines lock for functional-969800: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0122 10:57:45.985896   10176 start.go:369] acquired machines lock for "functional-969800" in 0s
	I0122 10:57:45.985896   10176 start.go:96] Skipping create...Using existing machine configuration
	I0122 10:57:45.985896   10176 fix.go:54] fixHost starting: 
	I0122 10:57:45.985896   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:57:48.714864   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:57:48.714864   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:57:48.714864   10176 fix.go:102] recreateIfNeeded on functional-969800: state=Running err=<nil>
	W0122 10:57:48.714864   10176 fix.go:128] unexpected machine state, will restart: <nil>
	I0122 10:57:48.716293   10176 out.go:177] * Updating the running hyperv "functional-969800" VM ...
	I0122 10:57:48.717336   10176 machine.go:88] provisioning docker machine ...
	I0122 10:57:48.717336   10176 buildroot.go:166] provisioning hostname "functional-969800"
	I0122 10:57:48.717336   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:57:50.876024   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:57:50.876229   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:57:50.876229   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:57:53.470625   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:57:53.470625   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:57:53.475376   10176 main.go:141] libmachine: Using SSH client type: native
	I0122 10:57:53.476351   10176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 10:57:53.476351   10176 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-969800 && echo "functional-969800" | sudo tee /etc/hostname
	I0122 10:57:53.638784   10176 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-969800
	
	I0122 10:57:53.638784   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:57:55.805671   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:57:55.805815   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:57:55.805815   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:57:58.401038   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:57:58.401038   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:57:58.407147   10176 main.go:141] libmachine: Using SSH client type: native
	I0122 10:57:58.407915   10176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 10:57:58.407915   10176 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-969800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-969800/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-969800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0122 10:57:58.548972   10176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
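	(Editor's note: the /etc/hosts command run over SSH above follows an idempotent pattern — rewrite the 127.0.1.1 entry only when the target hostname is not already present. A standalone sketch of the same logic follows; the file path and hostname are illustrative parameters, not minikube internals, and it uses portable `[[:space:]]` in place of GNU grep's `\s`.)

```shell
#!/bin/sh
# Sketch of the idempotent /etc/hosts hostname-fixup pattern from the log above.
# HOSTS and NAME are illustrative; minikube operates on /etc/hosts via sudo.
HOSTS="${1:-./hosts}"
NAME="${2:-functional-969800}"

if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
    if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
        # An old 127.0.1.1 entry exists: rewrite it in place
        sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
    else
        # No 127.0.1.1 entry yet: append one
        echo "127.0.1.1 $NAME" >> "$HOSTS"
    fi
fi
```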
	I0122 10:57:58.548972   10176 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0122 10:57:58.549601   10176 buildroot.go:174] setting up certificates
	I0122 10:57:58.549601   10176 provision.go:83] configureAuth start
	I0122 10:57:58.549601   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:00.709217   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:00.709410   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:00.709520   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:03.272329   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:03.272570   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:03.272686   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:05.424282   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:05.424282   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:05.424282   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:07.989209   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:07.989209   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:07.989209   10176 provision.go:138] copyHostCerts
	I0122 10:58:07.989628   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0122 10:58:07.990020   10176 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0122 10:58:07.990122   10176 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0122 10:58:07.990638   10176 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0122 10:58:07.992053   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0122 10:58:07.992258   10176 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0122 10:58:07.992345   10176 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0122 10:58:07.992434   10176 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0122 10:58:07.993591   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0122 10:58:07.993837   10176 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0122 10:58:07.993924   10176 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0122 10:58:07.994335   10176 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0122 10:58:07.995347   10176 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-969800 san=[172.23.182.59 172.23.182.59 localhost 127.0.0.1 minikube functional-969800]
	I0122 10:58:08.282789   10176 provision.go:172] copyRemoteCerts
	I0122 10:58:08.298910   10176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0122 10:58:08.298910   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:10.409009   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:10.409144   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:10.409236   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:12.976882   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:12.976956   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:12.977542   10176 sshutil.go:53] new ssh client: &{IP:172.23.182.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-969800\id_rsa Username:docker}
	I0122 10:58:13.085280   10176 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7863488s)
	I0122 10:58:13.085358   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0122 10:58:13.085992   10176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0122 10:58:13.126129   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0122 10:58:13.126556   10176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0122 10:58:13.168097   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0122 10:58:13.168507   10176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0122 10:58:13.217793   10176 provision.go:86] duration metric: configureAuth took 14.6681291s
	I0122 10:58:13.217918   10176 buildroot.go:189] setting minikube options for container-runtime
	I0122 10:58:13.218335   10176 config.go:182] Loaded profile config "functional-969800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 10:58:13.218335   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:15.351479   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:15.351479   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:15.351579   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:17.930279   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:17.930493   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:17.935586   10176 main.go:141] libmachine: Using SSH client type: native
	I0122 10:58:17.936273   10176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 10:58:17.936273   10176 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0122 10:58:18.079216   10176 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0122 10:58:18.079216   10176 buildroot.go:70] root file system type: tmpfs
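	(Editor's note: the `df --output=fstype / | tail -n 1` probe above is how the provisioner learns the root filesystem type — `tmpfs` here, since the buildroot guest runs from RAM — before deciding how to persist the docker unit. The probe on its own, assuming GNU coreutils `df`:)

```shell
# Query the filesystem type of the root mount, as the provisioner does above.
# On the minikube buildroot guest this prints "tmpfs"; on a typical Linux
# host it will print ext4, btrfs, etc.
FSTYPE="$(df --output=fstype / | tail -n 1)"
echo "root filesystem type: $FSTYPE"
```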
	I0122 10:58:18.079787   10176 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0122 10:58:18.079878   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:20.240707   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:20.240707   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:20.240707   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:22.913834   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:22.913834   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:22.919127   10176 main.go:141] libmachine: Using SSH client type: native
	I0122 10:58:22.919857   10176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 10:58:22.919857   10176 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
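	(Editor's note: the generated unit above relies on a standard systemd idiom — an empty `ExecStart=` first clears the command inherited from the base unit, then the second `ExecStart=` sets the replacement; without the clearing line, systemd refuses to start the service with "more than one ExecStart= setting". A minimal, unprivileged sketch of the same pattern; the `myservice` unit name and binary path are hypothetical:)

```shell
# Demonstrate the ExecStart-clearing drop-in pattern from the unit above.
# Writes to a temp directory instead of /etc so no root is needed; the
# 'myservice' name and binary path are illustrative only.
DROPIN_DIR="$(mktemp -d)/myservice.service.d"
mkdir -p "$DROPIN_DIR"
cat > "$DROPIN_DIR/override.conf" <<'EOF'
[Service]
# Empty ExecStart= clears the command inherited from the base unit.
ExecStart=
ExecStart=/usr/local/bin/myservice --flag
EOF
# The drop-in deliberately contains two ExecStart= directives: clear, then set.
grep -c '^ExecStart=' "$DROPIN_DIR/override.conf"
```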
	I0122 10:58:23.085993   10176 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0122 10:58:23.085993   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:25.254494   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:25.254494   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:25.254617   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:27.803581   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:27.803653   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:27.809703   10176 main.go:141] libmachine: Using SSH client type: native
	I0122 10:58:27.810422   10176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 10:58:27.810422   10176 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
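The SSH command above uses a diff-or-replace pattern: the new unit file only overwrites the live one (and only triggers a daemon-reload and restart) when its content actually differs. A minimal sketch of the same pattern against throwaway temp files (paths and the `echo` markers are illustrative, not from minikube):

```shell
# Write a candidate config, then replace the live file only when it differs.
# This keeps the update idempotent and avoids needless service restarts.
current=$(mktemp)
candidate=$(mktemp)
printf 'KillMode=process\n' > "$current"
printf 'KillMode=process\n' > "$candidate"

if ! diff -u "$current" "$candidate" > /dev/null; then
  mv "$candidate" "$current"
  echo "updated"      # a real provisioner would also daemon-reload/restart here
else
  echo "unchanged"    # identical content: leave the live file alone
fi
```

Because `diff` exits 0 on identical files, the `||`-style guard in the log only fires the `mv`/`systemctl` branch when a change is pending.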
	I0122 10:58:27.960981   10176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 10:58:27.960981   10176 machine.go:91] provisioned docker machine in 39.2434773s
	I0122 10:58:27.960981   10176 start.go:300] post-start starting for "functional-969800" (driver="hyperv")
	I0122 10:58:27.960981   10176 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0122 10:58:27.976090   10176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0122 10:58:27.976090   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:30.146064   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:30.146317   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:30.146317   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:32.720717   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:32.720900   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:32.720966   10176 sshutil.go:53] new ssh client: &{IP:172.23.182.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-969800\id_rsa Username:docker}
	I0122 10:58:32.831028   10176 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8548008s)
	I0122 10:58:32.844266   10176 ssh_runner.go:195] Run: cat /etc/os-release
	I0122 10:58:32.851138   10176 command_runner.go:130] > NAME=Buildroot
	I0122 10:58:32.851278   10176 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0122 10:58:32.851278   10176 command_runner.go:130] > ID=buildroot
	I0122 10:58:32.851490   10176 command_runner.go:130] > VERSION_ID=2021.02.12
	I0122 10:58:32.851490   10176 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0122 10:58:32.851544   10176 info.go:137] Remote host: Buildroot 2021.02.12
	I0122 10:58:32.851585   10176 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0122 10:58:32.852067   10176 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0122 10:58:32.853202   10176 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> 63962.pem in /etc/ssl/certs
	I0122 10:58:32.853202   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> /etc/ssl/certs/63962.pem
	I0122 10:58:32.854596   10176 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\6396\hosts -> hosts in /etc/test/nested/copy/6396
	I0122 10:58:32.854596   10176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\6396\hosts -> /etc/test/nested/copy/6396/hosts
	I0122 10:58:32.868168   10176 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/6396
	I0122 10:58:32.884833   10176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem --> /etc/ssl/certs/63962.pem (1708 bytes)
	I0122 10:58:32.932481   10176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\6396\hosts --> /etc/test/nested/copy/6396/hosts (40 bytes)
	I0122 10:58:32.976776   10176 start.go:303] post-start completed in 5.0157728s
	I0122 10:58:32.976776   10176 fix.go:56] fixHost completed within 46.9906788s
	I0122 10:58:32.976776   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:35.143871   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:35.143871   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:35.144017   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:37.703612   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:37.703920   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:37.710288   10176 main.go:141] libmachine: Using SSH client type: native
	I0122 10:58:37.710854   10176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 10:58:37.710854   10176 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0122 10:58:37.850913   10176 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705921117.851868669
	
	I0122 10:58:37.851011   10176 fix.go:206] guest clock: 1705921117.851868669
	I0122 10:58:37.851011   10176 fix.go:219] Guest: 2024-01-22 10:58:37.851868669 +0000 UTC Remote: 2024-01-22 10:58:32.9767767 +0000 UTC m=+52.675762201 (delta=4.875091969s)
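The delta logged by fix.go is simply guest epoch minus the host-observed remote time. A quick sketch that redoes that subtraction with the values from the log (the remote epoch is derived here from the logged UTC timestamp, so treat it as an assumption):

```shell
# Guest clock:  1705921117.851868669  (date +%s.%N on the VM)
# Remote time:  2024-01-22 10:58:32.9767767 UTC, i.e. ~1705921112.976776700
# awk handles the sub-second float arithmetic that plain shell cannot.
guest=1705921117.851868669
remote=1705921112.976776700
awk -v g="$guest" -v r="$remote" 'BEGIN { printf "%.6f\n", g - r }'
```

The result matches the `delta=4.875091969s` in the log (to microsecond precision), after which minikube resets the guest clock with `sudo date -s @<epoch>`.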
	I0122 10:58:37.851175   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:40.010410   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:40.010601   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:40.010601   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:42.618291   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:42.618378   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:42.625417   10176 main.go:141] libmachine: Using SSH client type: native
	I0122 10:58:42.626844   10176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd66240] 0xd68d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 10:58:42.626844   10176 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1705921117
	I0122 10:58:42.774588   10176 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan 22 10:58:37 UTC 2024
	
	I0122 10:58:42.774675   10176 fix.go:226] clock set: Mon Jan 22 10:58:37 UTC 2024
	 (err=<nil>)
	I0122 10:58:42.774675   10176 start.go:83] releasing machines lock for "functional-969800", held for 56.7885347s
	I0122 10:58:42.774814   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:44.916786   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:44.916786   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:44.916786   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:47.456679   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:47.456679   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:47.461274   10176 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0122 10:58:47.461385   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:47.472541   10176 ssh_runner.go:195] Run: cat /version.json
	I0122 10:58:47.472541   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 10:58:49.704663   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:49.704663   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:49.704663   10176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 10:58:49.704663   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:49.704663   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:49.705114   10176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 10:58:52.392500   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:52.392560   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:52.392560   10176 sshutil.go:53] new ssh client: &{IP:172.23.182.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-969800\id_rsa Username:docker}
	I0122 10:58:52.411796   10176 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 10:58:52.411796   10176 main.go:141] libmachine: [stderr =====>] : 
	I0122 10:58:52.412754   10176 sshutil.go:53] new ssh client: &{IP:172.23.182.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-969800\id_rsa Username:docker}
	I0122 10:58:52.487973   10176 command_runner.go:130] > {"iso_version": "v1.32.1-1703784139-17866", "kicbase_version": "v0.0.42-1703723663-17866", "minikube_version": "v1.32.0", "commit": "eb69424d8f623d7cabea57d4395ce87adf1b5fc3"}
	I0122 10:58:52.488809   10176 ssh_runner.go:235] Completed: cat /version.json: (5.0162464s)
	I0122 10:58:52.502299   10176 ssh_runner.go:195] Run: systemctl --version
	I0122 10:58:52.579182   10176 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0122 10:58:52.579508   10176 command_runner.go:130] > systemd 247 (247)
	I0122 10:58:52.579508   10176 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0122 10:58:52.579508   10176 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1182111s)
	I0122 10:58:52.592839   10176 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0122 10:58:52.600589   10176 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0122 10:58:52.601287   10176 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0122 10:58:52.614022   10176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0122 10:58:52.629161   10176 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0122 10:58:52.629512   10176 start.go:475] detecting cgroup driver to use...
	I0122 10:58:52.629804   10176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 10:58:52.662423   10176 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0122 10:58:52.677419   10176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0122 10:58:52.706434   10176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0122 10:58:52.723206   10176 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0122 10:58:52.736346   10176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0122 10:58:52.766629   10176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 10:58:52.795500   10176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0122 10:58:52.825136   10176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 10:58:52.853412   10176 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0122 10:58:52.881996   10176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0122 10:58:52.913498   10176 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0122 10:58:52.930750   10176 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0122 10:58:52.944733   10176 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0122 10:58:52.973118   10176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 10:58:53.199921   10176 ssh_runner.go:195] Run: sudo systemctl restart containerd
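The containerd reconfiguration above is a series of in-place `sed` rewrites against `/etc/containerd/config.toml`, preserving each line's indentation via a capture group. A sketch of the cgroup-driver rewrite run against a throwaway copy (the temp file stands in for the real config; the regex is the one from the log):

```shell
# Flip SystemdCgroup to false while keeping the original leading whitespace:
# the (\s*) capture group is re-emitted as \1 in the replacement.
cfg=$(mktemp)
printf '            SystemdCgroup = true\n' > "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"
```

Using `|` as the `s` delimiter avoids escaping the `/` characters that appear in other rewrites in the sequence (such as the `conf_dir` path).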
	I0122 10:58:53.229360   10176 start.go:475] detecting cgroup driver to use...
	I0122 10:58:53.244205   10176 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0122 10:58:53.265654   10176 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0122 10:58:53.265856   10176 command_runner.go:130] > [Unit]
	I0122 10:58:53.265856   10176 command_runner.go:130] > Description=Docker Application Container Engine
	I0122 10:58:53.265856   10176 command_runner.go:130] > Documentation=https://docs.docker.com
	I0122 10:58:53.265856   10176 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0122 10:58:53.265856   10176 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0122 10:58:53.265856   10176 command_runner.go:130] > StartLimitBurst=3
	I0122 10:58:53.265856   10176 command_runner.go:130] > StartLimitIntervalSec=60
	I0122 10:58:53.265856   10176 command_runner.go:130] > [Service]
	I0122 10:58:53.265856   10176 command_runner.go:130] > Type=notify
	I0122 10:58:53.265856   10176 command_runner.go:130] > Restart=on-failure
	I0122 10:58:53.265856   10176 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0122 10:58:53.265856   10176 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0122 10:58:53.265856   10176 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0122 10:58:53.265856   10176 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0122 10:58:53.265856   10176 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0122 10:58:53.265856   10176 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0122 10:58:53.265856   10176 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0122 10:58:53.265856   10176 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0122 10:58:53.265856   10176 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0122 10:58:53.265856   10176 command_runner.go:130] > ExecStart=
	I0122 10:58:53.265856   10176 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0122 10:58:53.265856   10176 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0122 10:58:53.265856   10176 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0122 10:58:53.265856   10176 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0122 10:58:53.265856   10176 command_runner.go:130] > LimitNOFILE=infinity
	I0122 10:58:53.265856   10176 command_runner.go:130] > LimitNPROC=infinity
	I0122 10:58:53.265856   10176 command_runner.go:130] > LimitCORE=infinity
	I0122 10:58:53.265856   10176 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0122 10:58:53.265856   10176 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0122 10:58:53.265856   10176 command_runner.go:130] > TasksMax=infinity
	I0122 10:58:53.265856   10176 command_runner.go:130] > TimeoutStartSec=0
	I0122 10:58:53.265856   10176 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0122 10:58:53.265856   10176 command_runner.go:130] > Delegate=yes
	I0122 10:58:53.265856   10176 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0122 10:58:53.265856   10176 command_runner.go:130] > KillMode=process
	I0122 10:58:53.266496   10176 command_runner.go:130] > [Install]
	I0122 10:58:53.266496   10176 command_runner.go:130] > WantedBy=multi-user.target
	I0122 10:58:53.282083   10176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 10:58:53.311447   10176 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0122 10:58:53.359743   10176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 10:58:53.396608   10176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0122 10:58:53.415523   10176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 10:58:53.444264   10176 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0122 10:58:53.456104   10176 ssh_runner.go:195] Run: which cri-dockerd
	I0122 10:58:53.461931   10176 command_runner.go:130] > /usr/bin/cri-dockerd
	I0122 10:58:53.476026   10176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0122 10:58:53.490730   10176 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0122 10:58:53.529862   10176 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0122 10:58:53.744013   10176 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0122 10:58:53.975729   10176 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0122 10:58:53.975983   10176 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0122 10:58:54.028787   10176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 10:58:54.249447   10176 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0122 11:00:05.628463   10176 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0122 11:00:05.628515   10176 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xe" for details.
	I0122 11:00:05.630958   10176 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.3811974s)
	I0122 11:00:05.644549   10176 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0122 11:00:05.669673   10176 command_runner.go:130] > -- Journal begins at Mon 2024-01-22 10:54:50 UTC, ends at Mon 2024-01-22 11:00:05 UTC. --
	I0122 11:00:05.670340   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	I0122 11:00:05.670340   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.640172956Z" level=info msg="Starting up"
	I0122 11:00:05.670340   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.641196772Z" level=info msg="containerd not running, starting managed containerd"
	I0122 11:00:05.670340   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.643170703Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=696
	I0122 11:00:05.670340   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.681307999Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	I0122 11:00:05.670433   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.711852976Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0122 11:00:05.670433   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.711950178Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.670523   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.714822923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.670535   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.714949725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.670535   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715319930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.670609   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715475833Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0122 11:00:05.670609   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715577834Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.670609   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715725437Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.670681   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715815938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.670681   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715939840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.670681   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716415947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.670751   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716515049Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0122 11:00:05.670751   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716533749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.670820   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716692852Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.670889   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716783253Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0122 11:00:05.670889   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716856454Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0122 11:00:05.670889   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716943756Z" level=info msg="metadata content store policy set" policy=shared
	I0122 11:00:05.670889   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731298280Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0122 11:00:05.670974   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731388081Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0122 11:00:05.670974   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731408582Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0122 11:00:05.670974   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731467083Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0122 11:00:05.671041   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731484883Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0122 11:00:05.671041   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731498383Z" level=info msg="NRI interface is disabled by configuration."
	I0122 11:00:05.671109   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731528784Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0122 11:00:05.671109   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731677686Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0122 11:00:05.671109   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731783888Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0122 11:00:05.671109   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731806788Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0122 11:00:05.671176   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731822488Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0122 11:00:05.671176   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731838588Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671176   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731859089Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671253   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731890989Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671301   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731924790Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671301   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731943290Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671301   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731959290Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671301   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731973491Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671392   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731992491Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0122 11:00:05.671392   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732288795Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0122 11:00:05.671392   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732814904Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.671392   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732879305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671483   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732900805Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0122 11:00:05.671483   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732940506Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0122 11:00:05.671483   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733012907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671483   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733046407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671557   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733235810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671557   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733295711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671557   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733311411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671557   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733344312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671627   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733358612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671627   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733373012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671627   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733387713Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0122 11:00:05.671699   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733473314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671699   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733594716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671699   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733615016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671770   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733643517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671770   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733664217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671770   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733681917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671770   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733711618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.671842   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733724518Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0122 11:00:05.671842   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733749418Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0122 11:00:05.671918   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733762418Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0122 11:00:05.671918   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733774019Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0122 11:00:05.671918   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734197325Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0122 11:00:05.672016   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734365128Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0122 11:00:05.672016   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734471630Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0122 11:00:05.672016   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734496030Z" level=info msg="containerd successfully booted in 0.054269s"
	I0122 11:00:05.672016   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.768676964Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0122 11:00:05.672016   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.783849401Z" level=info msg="Loading containers: start."
	I0122 11:00:05.672104   10176 command_runner.go:130] > Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.997712742Z" level=info msg="Loading containers: done."
	I0122 11:00:05.672104   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022214804Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	I0122 11:00:05.672104   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022261705Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	I0122 11:00:05.672104   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022274505Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	I0122 11:00:05.672104   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022281005Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	I0122 11:00:05.672104   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022301806Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	I0122 11:00:05.672183   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022416907Z" level=info msg="Daemon has completed initialization"
	I0122 11:00:05.672183   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.083652704Z" level=info msg="API listen on /var/run/docker.sock"
	I0122 11:00:05.672183   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.083926108Z" level=info msg="API listen on [::]:2376"
	I0122 11:00:05.672183   10176 command_runner.go:130] > Jan 22 10:55:42 functional-969800 systemd[1]: Started Docker Application Container Engine.
	I0122 11:00:05.672248   10176 command_runner.go:130] > Jan 22 10:56:12 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	I0122 11:00:05.672248   10176 command_runner.go:130] > Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.621957907Z" level=info msg="Processing signal 'terminated'"
	I0122 11:00:05.672248   10176 command_runner.go:130] > Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623204307Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0122 11:00:05.672329   10176 command_runner.go:130] > Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623373807Z" level=info msg="Daemon shutdown complete"
	I0122 11:00:05.672329   10176 command_runner.go:130] > Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623417807Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0122 11:00:05.672385   10176 command_runner.go:130] > Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623429307Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0122 11:00:05.672460   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 systemd[1]: docker.service: Succeeded.
	I0122 11:00:05.672518   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	I0122 11:00:05.672518   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	I0122 11:00:05.672518   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.697995107Z" level=info msg="Starting up"
	I0122 11:00:05.672518   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.699240007Z" level=info msg="containerd not running, starting managed containerd"
	I0122 11:00:05.672574   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.700630907Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1031
	I0122 11:00:05.672596   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.732892407Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	I0122 11:00:05.672596   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.759794007Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0122 11:00:05.672649   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.759947407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.672649   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.762945807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.672725   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763156107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.672833   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763474707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.672888   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763800907Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0122 11:00:05.672929   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764170707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764208607Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764225307Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764251107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764439107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764536207Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764552707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764742807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764836407Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764867007Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764897007Z" level=info msg="metadata content store policy set" policy=shared
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765451307Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765545607Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765570407Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765599407Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765631407Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765670507Z" level=info msg="NRI interface is disabled by configuration."
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765715707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765770507Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765918107Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765954407Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765992607Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766011607Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766135807Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766161807Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766178607Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766211907Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766239507Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766258407Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766273107Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766317107Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767449407Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767559907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767585207Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767614707Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767679407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767698307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767713707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.672956   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767728107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.674886   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767743307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.674886   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767759107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.674886   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767773607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.674886   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767790107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.674886   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767807907Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0122 11:00:05.675068   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767846207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.675068   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767884107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.675068   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767901707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.675068   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767917307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.675164   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767934707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.675164   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767953007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.675164   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767985707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.675164   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768000707Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0122 11:00:05.675164   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768018307Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0122 11:00:05.675282   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768110907Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0122 11:00:05.675282   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768150607Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0122 11:00:05.675282   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770293407Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0122 11:00:05.675396   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770589207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0122 11:00:05.675396   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770747207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0122 11:00:05.675396   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770779907Z" level=info msg="containerd successfully booted in 0.039287s"
	I0122 11:00:05.675396   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.796437607Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0122 11:00:05.675484   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.812301607Z" level=info msg="Loading containers: start."
	I0122 11:00:05.675484   10176 command_runner.go:130] > Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.959771407Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0122 11:00:05.675484   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.025356107Z" level=info msg="Loading containers: done."
	I0122 11:00:05.675484   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043938307Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	I0122 11:00:05.675563   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043967507Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	I0122 11:00:05.675563   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043976207Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	I0122 11:00:05.675563   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043982907Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	I0122 11:00:05.675679   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.044002007Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	I0122 11:00:05.675679   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.044081507Z" level=info msg="Daemon has completed initialization"
	I0122 11:00:05.675679   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.084793407Z" level=info msg="API listen on /var/run/docker.sock"
	I0122 11:00:05.675751   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 systemd[1]: Started Docker Application Container Engine.
	I0122 11:00:05.675751   10176 command_runner.go:130] > Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.087289507Z" level=info msg="API listen on [::]:2376"
	I0122 11:00:05.675751   10176 command_runner.go:130] > Jan 22 10:56:24 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	I0122 11:00:05.675751   10176 command_runner.go:130] > Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.888530207Z" level=info msg="Processing signal 'terminated'"
	I0122 11:00:05.675824   10176 command_runner.go:130] > Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890215407Z" level=info msg="Daemon shutdown complete"
	I0122 11:00:05.675824   10176 command_runner.go:130] > Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890260307Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0122 11:00:05.675992   10176 command_runner.go:130] > Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890587307Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	I0122 11:00:05.676061   10176 command_runner.go:130] > Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890740407Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0122 11:00:05.676061   10176 command_runner.go:130] > Jan 22 10:56:25 functional-969800 systemd[1]: docker.service: Succeeded.
	I0122 11:00:05.676135   10176 command_runner.go:130] > Jan 22 10:56:25 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	I0122 11:00:05.676135   10176 command_runner.go:130] > Jan 22 10:56:25 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	I0122 11:00:05.676135   10176 command_runner.go:130] > Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.965408907Z" level=info msg="Starting up"
	I0122 11:00:05.676135   10176 command_runner.go:130] > Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.966518207Z" level=info msg="containerd not running, starting managed containerd"
	I0122 11:00:05.676135   10176 command_runner.go:130] > Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.967588307Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1337
	I0122 11:00:05.676135   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.001795407Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	I0122 11:00:05.676234   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.035509507Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0122 11:00:05.676234   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.035631707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.676308   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038165807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.676308   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038334707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.676345   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038628507Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.676345   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038744407Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0122 11:00:05.676418   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038816807Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.676418   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038858807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.676418   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038871007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.676492   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038894707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.676492   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039248607Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.676492   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039359407Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0122 11:00:05.676564   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039378107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0122 11:00:05.676594   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039547707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0122 11:00:05.676653   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039641707Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039668207Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039685807Z" level=info msg="metadata content store policy set" policy=shared
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039841507Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039933007Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040079807Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040230207Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040269807Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040281707Z" level=info msg="NRI interface is disabled by configuration."
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040294707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040361507Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040377307Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040391107Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040409607Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040429307Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040449707Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040463607Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040482307Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040514007Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040527507Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040540307Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040551607Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040608107Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042558207Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0122 11:00:05.676683   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042674107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677265   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042695907Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0122 11:00:05.677265   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042734707Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0122 11:00:05.677265   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042787207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677265   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042916307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677265   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042937207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677265   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042958307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677265   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042974407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677410   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042993607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043009507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043463207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043558107Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043662307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043706107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043727707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043761007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043778507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043801907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043893407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043920207Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043942907Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043958907Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043974107Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044581807Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044705407Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044803207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044899507Z" level=info msg="containerd successfully booted in 0.045219s"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:26 functional-969800 dockerd[1331]: time="2024-01-22T10:56:26.371254407Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.294692607Z" level=info msg="Loading containers: start."
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.459462607Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.525291207Z" level=info msg="Loading containers: done."
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544516307Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544552907Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544562107Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544572107Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544597307Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	I0122 11:00:05.677435   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544708807Z" level=info msg="Daemon has completed initialization"
	I0122 11:00:05.678028   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.576971407Z" level=info msg="API listen on [::]:2376"
	I0122 11:00:05.678028   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.577092907Z" level=info msg="API listen on /var/run/docker.sock"
	I0122 11:00:05.678028   10176 command_runner.go:130] > Jan 22 10:56:30 functional-969800 systemd[1]: Started Docker Application Container Engine.
	I0122 11:00:05.678028   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643012417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678028   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643161510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643191508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643608487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648436541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648545436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648613232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648628031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650622030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650800621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650820520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650831019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.675561460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.675860444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.676066234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.676182428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627214283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627479370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627708859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627925149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.661590442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.665215768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.665413259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.667762047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.837739331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678165   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838092414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678708   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838119012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678708   10176 command_runner.go:130] > Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838132512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678708   10176 command_runner.go:130] > Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029726252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678708   10176 command_runner.go:130] > Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029885345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678708   10176 command_runner.go:130] > Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029907544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678852   10176 command_runner.go:130] > Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029921943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678884   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442471250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678884   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442551949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678933   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442585148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442603448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.636810797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637124693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637309591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637408590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.128989239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129139537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129211936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129226336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.005359252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.018527500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.020526276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.020768074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531424444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531481843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531496243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531507943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.163527770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.164982859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.165023859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.165340756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	I0122 11:00:05.678963   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.271367222Z" level=info msg="Processing signal 'terminated'"
	I0122 11:00:05.679489   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.408491526Z" level=info msg="ignoring event" container=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.679489   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.408935427Z" level=info msg="shim disconnected" id=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c namespace=moby
	I0122 11:00:05.679489   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.409367629Z" level=warning msg="cleaning up after shim disconnected" id=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c namespace=moby
	I0122 11:00:05.679489   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.409386829Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482131396Z" level=info msg="shim disconnected" id=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482242197Z" level=warning msg="cleaning up after shim disconnected" id=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482643498Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.485265908Z" level=info msg="ignoring event" container=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.514982317Z" level=info msg="ignoring event" container=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.517274025Z" level=info msg="shim disconnected" id=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.518024528Z" level=warning msg="cleaning up after shim disconnected" id=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.518088128Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.533465185Z" level=info msg="ignoring event" container=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533775786Z" level=info msg="shim disconnected" id=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533878986Z" level=warning msg="cleaning up after shim disconnected" id=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533896387Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.541816616Z" level=info msg="ignoring event" container=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.544778327Z" level=info msg="shim disconnected" id=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.545052128Z" level=warning msg="cleaning up after shim disconnected" id=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.545071928Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.564756500Z" level=info msg="ignoring event" container=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.566095405Z" level=info msg="shim disconnected" id=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.580413557Z" level=warning msg="cleaning up after shim disconnected" id=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 namespace=moby
	I0122 11:00:05.679584   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.580441258Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680179   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590163393Z" level=info msg="shim disconnected" id=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f namespace=moby
	I0122 11:00:05.680179   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590197393Z" level=warning msg="cleaning up after shim disconnected" id=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f namespace=moby
	I0122 11:00:05.680179   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590265894Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.600700032Z" level=info msg="shim disconnected" id=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.600935033Z" level=info msg="ignoring event" container=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.600978833Z" level=info msg="ignoring event" container=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.611137470Z" level=info msg="ignoring event" container=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611370671Z" level=warning msg="cleaning up after shim disconnected" id=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611429671Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611635772Z" level=info msg="shim disconnected" id=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.612703976Z" level=warning msg="cleaning up after shim disconnected" id=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.613303678Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.613702780Z" level=info msg="ignoring event" container=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.616524590Z" level=info msg="ignoring event" container=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.618830499Z" level=info msg="shim disconnected" id=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.618976199Z" level=warning msg="cleaning up after shim disconnected" id=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.627451230Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.628495534Z" level=info msg="shim disconnected" id=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.630007140Z" level=warning msg="cleaning up after shim disconnected" id=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.630196840Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.648394307Z" level=info msg="shim disconnected" id=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.650553715Z" level=info msg="ignoring event" container=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.658301844Z" level=warning msg="cleaning up after shim disconnected" id=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.658345944Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:59 functional-969800 dockerd[1331]: time="2024-01-22T10:58:59.444520533Z" level=info msg="ignoring event" container=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.447631244Z" level=info msg="shim disconnected" id=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.448443247Z" level=warning msg="cleaning up after shim disconnected" id=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.448523248Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.436253177Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.489047672Z" level=info msg="ignoring event" container=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.490548477Z" level=info msg="shim disconnected" id=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.490993079Z" level=warning msg="cleaning up after shim disconnected" id=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.491010379Z" level=info msg="cleaning up dead shim" namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.536562746Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.536997048Z" level=info msg="Daemon shutdown complete"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.537118848Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.537155648Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:05 functional-969800 systemd[1]: docker.service: Succeeded.
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:05 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:05 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 10:59:05 functional-969800 dockerd[5681]: time="2024-01-22T10:59:05.618184021Z" level=info msg="Starting up"
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 11:00:05 functional-969800 dockerd[5681]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 11:00:05 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 11:00:05 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0122 11:00:05.680310   10176 command_runner.go:130] > Jan 22 11:00:05 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	I0122 11:00:05.703571   10176 out.go:177] 
	W0122 11:00:05.704224   10176 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Mon 2024-01-22 10:54:50 UTC, ends at Mon 2024-01-22 11:00:05 UTC. --
	Jan 22 10:55:41 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.640172956Z" level=info msg="Starting up"
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.641196772Z" level=info msg="containerd not running, starting managed containerd"
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.643170703Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=696
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.681307999Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.711852976Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.711950178Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.714822923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.714949725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715319930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715475833Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715577834Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715725437Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715815938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715939840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716415947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716515049Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716533749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716692852Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716783253Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716856454Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716943756Z" level=info msg="metadata content store policy set" policy=shared
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731298280Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731388081Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731408582Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731467083Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731484883Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731498383Z" level=info msg="NRI interface is disabled by configuration."
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731528784Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731677686Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731783888Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731806788Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731822488Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731838588Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731859089Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731890989Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731924790Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731943290Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731959290Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731973491Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731992491Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732288795Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732814904Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732879305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732900805Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732940506Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733012907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733046407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733235810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733295711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733311411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733344312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733358612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733373012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733387713Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733473314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733594716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733615016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733643517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733664217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733681917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733711618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733724518Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733749418Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733762418Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733774019Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734197325Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734365128Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734471630Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734496030Z" level=info msg="containerd successfully booted in 0.054269s"
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.768676964Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.783849401Z" level=info msg="Loading containers: start."
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.997712742Z" level=info msg="Loading containers: done."
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022214804Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022261705Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022274505Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022281005Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022301806Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022416907Z" level=info msg="Daemon has completed initialization"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.083652704Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.083926108Z" level=info msg="API listen on [::]:2376"
	Jan 22 10:55:42 functional-969800 systemd[1]: Started Docker Application Container Engine.
	Jan 22 10:56:12 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.621957907Z" level=info msg="Processing signal 'terminated'"
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623204307Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623373807Z" level=info msg="Daemon shutdown complete"
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623417807Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623429307Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 22 10:56:13 functional-969800 systemd[1]: docker.service: Succeeded.
	Jan 22 10:56:13 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 10:56:13 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.697995107Z" level=info msg="Starting up"
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.699240007Z" level=info msg="containerd not running, starting managed containerd"
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.700630907Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1031
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.732892407Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.759794007Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.759947407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.762945807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763156107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763474707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763800907Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764170707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764208607Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764225307Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764251107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764439107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764536207Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764552707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764742807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764836407Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764867007Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764897007Z" level=info msg="metadata content store policy set" policy=shared
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765451307Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765545607Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765570407Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765599407Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765631407Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765670507Z" level=info msg="NRI interface is disabled by configuration."
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765715707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765770507Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765918107Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765954407Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765992607Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766011607Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766135807Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766161807Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766178607Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766211907Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766239507Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766258407Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766273107Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766317107Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767449407Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767559907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767585207Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767614707Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767679407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767698307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767713707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767728107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767743307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767759107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767773607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767790107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767807907Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767846207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767884107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767901707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767917307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767934707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767953007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767985707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768000707Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768018307Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768110907Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768150607Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770293407Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770589207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770747207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770779907Z" level=info msg="containerd successfully booted in 0.039287s"
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.796437607Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.812301607Z" level=info msg="Loading containers: start."
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.959771407Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.025356107Z" level=info msg="Loading containers: done."
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043938307Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043967507Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043976207Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043982907Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.044002007Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.044081507Z" level=info msg="Daemon has completed initialization"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.084793407Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 22 10:56:14 functional-969800 systemd[1]: Started Docker Application Container Engine.
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.087289507Z" level=info msg="API listen on [::]:2376"
	Jan 22 10:56:24 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.888530207Z" level=info msg="Processing signal 'terminated'"
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890215407Z" level=info msg="Daemon shutdown complete"
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890260307Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890587307Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890740407Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 22 10:56:25 functional-969800 systemd[1]: docker.service: Succeeded.
	Jan 22 10:56:25 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 10:56:25 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.965408907Z" level=info msg="Starting up"
	Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.966518207Z" level=info msg="containerd not running, starting managed containerd"
	Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.967588307Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1337
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.001795407Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.035509507Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.035631707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038165807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038334707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038628507Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038744407Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038816807Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038858807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038871007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038894707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039248607Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039359407Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039378107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039547707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039641707Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039668207Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039685807Z" level=info msg="metadata content store policy set" policy=shared
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039841507Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039933007Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040079807Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040230207Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040269807Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040281707Z" level=info msg="NRI interface is disabled by configuration."
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040294707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040361507Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040377307Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040391107Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040409607Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040429307Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040449707Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040463607Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040482307Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040514007Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040527507Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040540307Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040551607Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040608107Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042558207Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042674107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042695907Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042734707Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042787207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042916307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042937207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042958307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042974407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042993607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043009507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043463207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043558107Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043662307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043706107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043727707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043761007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043778507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043801907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043893407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043920207Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043942907Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043958907Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043974107Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044581807Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044705407Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044803207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044899507Z" level=info msg="containerd successfully booted in 0.045219s"
	Jan 22 10:56:26 functional-969800 dockerd[1331]: time="2024-01-22T10:56:26.371254407Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.294692607Z" level=info msg="Loading containers: start."
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.459462607Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.525291207Z" level=info msg="Loading containers: done."
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544516307Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544552907Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544562107Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544572107Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544597307Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544708807Z" level=info msg="Daemon has completed initialization"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.576971407Z" level=info msg="API listen on [::]:2376"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.577092907Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 22 10:56:30 functional-969800 systemd[1]: Started Docker Application Container Engine.
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643012417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643161510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643191508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643608487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648436541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648545436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648613232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648628031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650622030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650800621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650820520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650831019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.675561460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.675860444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.676066234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.676182428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627214283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627479370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627708859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627925149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.661590442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.665215768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.665413259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.667762047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.837739331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838092414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838119012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838132512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029726252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029885345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029907544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029921943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442471250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442551949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442585148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442603448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.636810797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637124693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637309591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637408590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.128989239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129139537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129211936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129226336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.005359252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.018527500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.020526276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.020768074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531424444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531481843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531496243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531507943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.163527770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.164982859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.165023859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.165340756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:58:54 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.271367222Z" level=info msg="Processing signal 'terminated'"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.408491526Z" level=info msg="ignoring event" container=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.408935427Z" level=info msg="shim disconnected" id=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.409367629Z" level=warning msg="cleaning up after shim disconnected" id=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.409386829Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482131396Z" level=info msg="shim disconnected" id=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482242197Z" level=warning msg="cleaning up after shim disconnected" id=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482643498Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.485265908Z" level=info msg="ignoring event" container=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.514982317Z" level=info msg="ignoring event" container=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.517274025Z" level=info msg="shim disconnected" id=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.518024528Z" level=warning msg="cleaning up after shim disconnected" id=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.518088128Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.533465185Z" level=info msg="ignoring event" container=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533775786Z" level=info msg="shim disconnected" id=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533878986Z" level=warning msg="cleaning up after shim disconnected" id=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533896387Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.541816616Z" level=info msg="ignoring event" container=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.544778327Z" level=info msg="shim disconnected" id=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.545052128Z" level=warning msg="cleaning up after shim disconnected" id=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.545071928Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.564756500Z" level=info msg="ignoring event" container=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.566095405Z" level=info msg="shim disconnected" id=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.580413557Z" level=warning msg="cleaning up after shim disconnected" id=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.580441258Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590163393Z" level=info msg="shim disconnected" id=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590197393Z" level=warning msg="cleaning up after shim disconnected" id=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590265894Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.600700032Z" level=info msg="shim disconnected" id=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.600935033Z" level=info msg="ignoring event" container=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.600978833Z" level=info msg="ignoring event" container=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.611137470Z" level=info msg="ignoring event" container=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611370671Z" level=warning msg="cleaning up after shim disconnected" id=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611429671Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611635772Z" level=info msg="shim disconnected" id=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.612703976Z" level=warning msg="cleaning up after shim disconnected" id=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.613303678Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.613702780Z" level=info msg="ignoring event" container=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.616524590Z" level=info msg="ignoring event" container=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.618830499Z" level=info msg="shim disconnected" id=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.618976199Z" level=warning msg="cleaning up after shim disconnected" id=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.627451230Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.628495534Z" level=info msg="shim disconnected" id=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.630007140Z" level=warning msg="cleaning up after shim disconnected" id=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.630196840Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.648394307Z" level=info msg="shim disconnected" id=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.650553715Z" level=info msg="ignoring event" container=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.658301844Z" level=warning msg="cleaning up after shim disconnected" id=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.658345944Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:59 functional-969800 dockerd[1331]: time="2024-01-22T10:58:59.444520533Z" level=info msg="ignoring event" container=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.447631244Z" level=info msg="shim disconnected" id=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 namespace=moby
	Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.448443247Z" level=warning msg="cleaning up after shim disconnected" id=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 namespace=moby
	Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.448523248Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.436253177Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.489047672Z" level=info msg="ignoring event" container=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.490548477Z" level=info msg="shim disconnected" id=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.490993079Z" level=warning msg="cleaning up after shim disconnected" id=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.491010379Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.536562746Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.536997048Z" level=info msg="Daemon shutdown complete"
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.537118848Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.537155648Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 22 10:59:05 functional-969800 systemd[1]: docker.service: Succeeded.
	Jan 22 10:59:05 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 10:59:05 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 10:59:05 functional-969800 dockerd[5681]: time="2024-01-22T10:59:05.618184021Z" level=info msg="Starting up"
	Jan 22 11:00:05 functional-969800 dockerd[5681]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:00:05 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:00:05 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:00:05 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0122 11:00:05.705403   10176 out.go:239] * 
	W0122 11:00:05.706768   10176 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0122 11:00:05.707971   10176 out.go:177] 
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-01-22 10:54:50 UTC, ends at Mon 2024-01-22 11:22:11 UTC. --
	Jan 22 11:19:10 functional-969800 dockerd[9236]: time="2024-01-22T11:19:10.567700970Z" level=info msg="Starting up"
	Jan 22 11:20:10 functional-969800 dockerd[9236]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:20:10 functional-969800 cri-dockerd[1222]: time="2024-01-22T11:20:10Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Jan 22 11:20:10 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:20:10 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:20:10 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:20:10 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 21.
	Jan 22 11:20:10 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:20:10 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:20:10 functional-969800 dockerd[9393]: time="2024-01-22T11:20:10.785376681Z" level=info msg="Starting up"
	Jan 22 11:21:10 functional-969800 dockerd[9393]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:21:10 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:21:10 functional-969800 cri-dockerd[1222]: time="2024-01-22T11:21:10Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Jan 22 11:21:10 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:21:10 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:21:10 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 22.
	Jan 22 11:21:10 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:21:10 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:21:11 functional-969800 dockerd[9646]: time="2024-01-22T11:21:11.049728847Z" level=info msg="Starting up"
	Jan 22 11:22:11 functional-969800 dockerd[9646]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:22:11 functional-969800 cri-dockerd[1222]: time="2024-01-22T11:22:11Z" level=error msg="Unable to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:22:11 functional-969800 cri-dockerd[1222]: time="2024-01-22T11:22:11Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Jan 22 11:22:11 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:22:11 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:22:11 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	
	
	==> container status <==
	
	==> describe nodes <==
	
	==> dmesg <==
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +7.735107] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jan22 10:55] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.139168] systemd-fstab-generator[682]: Ignoring "noauto" for root device
	[Jan22 10:56] systemd-fstab-generator[954]: Ignoring "noauto" for root device
	[  +0.548146] systemd-fstab-generator[992]: Ignoring "noauto" for root device
	[  +0.177021] systemd-fstab-generator[1003]: Ignoring "noauto" for root device
	[  +0.192499] systemd-fstab-generator[1016]: Ignoring "noauto" for root device
	[  +1.326236] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.405236] systemd-fstab-generator[1177]: Ignoring "noauto" for root device
	[  +0.162074] systemd-fstab-generator[1188]: Ignoring "noauto" for root device
	[  +0.187859] systemd-fstab-generator[1199]: Ignoring "noauto" for root device
	[  +0.249478] systemd-fstab-generator[1214]: Ignoring "noauto" for root device
	[  +9.926271] systemd-fstab-generator[1322]: Ignoring "noauto" for root device
	[  +5.555073] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.934437] systemd-fstab-generator[1705]: Ignoring "noauto" for root device
	[  +0.641554] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.684973] systemd-fstab-generator[2663]: Ignoring "noauto" for root device
	[Jan22 10:57] kauditd_printk_skb: 21 callbacks suppressed
	[Jan22 10:58] systemd-fstab-generator[5187]: Ignoring "noauto" for root device
	[  +0.564007] systemd-fstab-generator[5225]: Ignoring "noauto" for root device
	[  +0.215913] systemd-fstab-generator[5236]: Ignoring "noauto" for root device
	[  +0.280562] systemd-fstab-generator[5249]: Ignoring "noauto" for root device
	
	
	==> kernel <==
	 11:23:11 up 28 min,  0 users,  load average: 0.00, 0.01, 0.04
	Linux functional-969800 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-22 10:54:50 UTC, ends at Mon 2024-01-22 11:23:11 UTC. --
	Jan 22 11:23:03 functional-969800 kubelet[2697]: E0122 11:23:03.123873    2697 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-969800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-969800?resourceVersion=0&timeout=10s\": dial tcp 172.23.182.59:8441: connect: connection refused"
	Jan 22 11:23:03 functional-969800 kubelet[2697]: E0122 11:23:03.124343    2697 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-969800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-969800?timeout=10s\": dial tcp 172.23.182.59:8441: connect: connection refused"
	Jan 22 11:23:03 functional-969800 kubelet[2697]: E0122 11:23:03.124892    2697 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-969800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-969800?timeout=10s\": dial tcp 172.23.182.59:8441: connect: connection refused"
	Jan 22 11:23:03 functional-969800 kubelet[2697]: E0122 11:23:03.125516    2697 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-969800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-969800?timeout=10s\": dial tcp 172.23.182.59:8441: connect: connection refused"
	Jan 22 11:23:03 functional-969800 kubelet[2697]: E0122 11:23:03.125793    2697 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-969800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-969800?timeout=10s\": dial tcp 172.23.182.59:8441: connect: connection refused"
	Jan 22 11:23:03 functional-969800 kubelet[2697]: E0122 11:23:03.125884    2697 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
	Jan 22 11:23:03 functional-969800 kubelet[2697]: E0122 11:23:03.803927    2697 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 24m9.887190881s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Jan 22 11:23:05 functional-969800 kubelet[2697]: E0122 11:23:05.456607    2697 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-969800?timeout=10s\": dial tcp 172.23.182.59:8441: connect: connection refused" interval="7s"
	Jan 22 11:23:08 functional-969800 kubelet[2697]: E0122 11:23:08.805194    2697 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 24m14.888450039s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Jan 22 11:23:11 functional-969800 kubelet[2697]: E0122 11:23:11.285581    2697 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jan 22 11:23:11 functional-969800 kubelet[2697]: E0122 11:23:11.289687    2697 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:23:11 functional-969800 kubelet[2697]: E0122 11:23:11.289788    2697 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:23:11 functional-969800 kubelet[2697]: E0122 11:23:11.290034    2697 kubelet.go:2865] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:23:11 functional-969800 kubelet[2697]: E0122 11:23:11.290334    2697 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jan 22 11:23:11 functional-969800 kubelet[2697]: E0122 11:23:11.291079    2697 kuberuntime_container.go:477] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:23:11 functional-969800 kubelet[2697]: E0122 11:23:11.291597    2697 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jan 22 11:23:11 functional-969800 kubelet[2697]: E0122 11:23:11.291773    2697 kuberuntime_image.go:103] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:23:11 functional-969800 kubelet[2697]: E0122 11:23:11.291875    2697 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jan 22 11:23:11 functional-969800 kubelet[2697]: E0122 11:23:11.292499    2697 container_log_manager.go:185] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:23:11 functional-969800 kubelet[2697]: E0122 11:23:11.292617    2697 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:23:11 functional-969800 kubelet[2697]: E0122 11:23:11.293947    2697 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:23:11 functional-969800 kubelet[2697]: I0122 11:23:11.292081    2697 image_gc_manager.go:210] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:23:11 functional-969800 kubelet[2697]: E0122 11:23:11.293878    2697 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jan 22 11:23:11 functional-969800 kubelet[2697]: E0122 11:23:11.296114    2697 kuberuntime_container.go:477] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jan 22 11:23:11 functional-969800 kubelet[2697]: E0122 11:23:11.296672    2697 kubelet.go:1402] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	
	

-- /stdout --
** stderr ** 
	W0122 11:20:46.191050    7504 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0122 11:21:10.795956    7504 logs.go:281] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:21:10.836100    7504 logs.go:281] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:21:10.867846    7504 logs.go:281] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:21:10.900440    7504 logs.go:281] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:21:10.935453    7504 logs.go:281] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:21:10.967521    7504 logs.go:281] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:22:11.064266    7504 logs.go:281] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:22:11.100733    7504 logs.go:281] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:23:11.288650    7504 logs.go:195] command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-01-22T11:22:11Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unknown desc = failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	 output: "\n** stderr ** \ntime=\"2024-01-22T11:22:11Z\" level=fatal msg=\"validate service connection: validate CRI v1 runtime API for endpoint \\\"unix:///var/run/cri-dockerd.sock\\\": rpc error: code = Unknown desc = failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?\"\nCannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?\n\n** /stderr **"
	E0122 11:23:11.384124    7504 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8441 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: container status, describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-969800 -n functional-969800
E0122 11:23:23.958851    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-969800 -n functional-969800: exit status 2 (12.1730185s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0122 11:23:12.072958    4316 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-969800" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (180.68s)

TestFunctional/serial/ExtraConfig (300.91s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-969800 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-969800 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 90 (2m47.9204885s)

-- stdout --
	* [functional-969800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18014
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting control plane node functional-969800 in cluster functional-969800
	* Updating the running hyperv "functional-969800" VM ...
	
	

-- /stdout --
** stderr ** 
	W0122 11:23:24.236860    6184 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! This VM is having trouble accessing https://registry.k8s.io
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Mon 2024-01-22 10:54:50 UTC, ends at Mon 2024-01-22 11:26:11 UTC. --
	Jan 22 10:55:41 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.640172956Z" level=info msg="Starting up"
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.641196772Z" level=info msg="containerd not running, starting managed containerd"
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.643170703Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=696
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.681307999Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.711852976Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.711950178Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.714822923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.714949725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715319930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715475833Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715577834Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715725437Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715815938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715939840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716415947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716515049Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716533749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716692852Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716783253Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716856454Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716943756Z" level=info msg="metadata content store policy set" policy=shared
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731298280Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731388081Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731408582Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731467083Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731484883Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731498383Z" level=info msg="NRI interface is disabled by configuration."
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731528784Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731677686Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731783888Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731806788Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731822488Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731838588Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731859089Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731890989Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731924790Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731943290Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731959290Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731973491Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731992491Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732288795Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732814904Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732879305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732900805Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732940506Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733012907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733046407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733235810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733295711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733311411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733344312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733358612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733373012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733387713Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733473314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733594716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733615016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733643517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733664217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733681917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733711618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733724518Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733749418Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733762418Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733774019Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734197325Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734365128Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734471630Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734496030Z" level=info msg="containerd successfully booted in 0.054269s"
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.768676964Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.783849401Z" level=info msg="Loading containers: start."
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.997712742Z" level=info msg="Loading containers: done."
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022214804Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022261705Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022274505Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022281005Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022301806Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022416907Z" level=info msg="Daemon has completed initialization"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.083652704Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.083926108Z" level=info msg="API listen on [::]:2376"
	Jan 22 10:55:42 functional-969800 systemd[1]: Started Docker Application Container Engine.
	Jan 22 10:56:12 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.621957907Z" level=info msg="Processing signal 'terminated'"
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623204307Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623373807Z" level=info msg="Daemon shutdown complete"
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623417807Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623429307Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 22 10:56:13 functional-969800 systemd[1]: docker.service: Succeeded.
	Jan 22 10:56:13 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 10:56:13 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.697995107Z" level=info msg="Starting up"
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.699240007Z" level=info msg="containerd not running, starting managed containerd"
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.700630907Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1031
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.732892407Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.759794007Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.759947407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.762945807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763156107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763474707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763800907Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764170707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764208607Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764225307Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764251107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764439107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764536207Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764552707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764742807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764836407Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764867007Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764897007Z" level=info msg="metadata content store policy set" policy=shared
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765451307Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765545607Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765570407Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765599407Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765631407Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765670507Z" level=info msg="NRI interface is disabled by configuration."
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765715707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765770507Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765918107Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765954407Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765992607Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766011607Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766135807Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766161807Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766178607Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766211907Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766239507Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766258407Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766273107Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766317107Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767449407Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767559907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767585207Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767614707Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767679407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767698307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767713707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767728107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767743307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767759107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767773607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767790107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767807907Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767846207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767884107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767901707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767917307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767934707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767953007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767985707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768000707Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768018307Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768110907Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768150607Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770293407Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770589207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770747207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770779907Z" level=info msg="containerd successfully booted in 0.039287s"
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.796437607Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.812301607Z" level=info msg="Loading containers: start."
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.959771407Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.025356107Z" level=info msg="Loading containers: done."
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043938307Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043967507Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043976207Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043982907Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.044002007Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.044081507Z" level=info msg="Daemon has completed initialization"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.084793407Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 22 10:56:14 functional-969800 systemd[1]: Started Docker Application Container Engine.
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.087289507Z" level=info msg="API listen on [::]:2376"
	Jan 22 10:56:24 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.888530207Z" level=info msg="Processing signal 'terminated'"
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890215407Z" level=info msg="Daemon shutdown complete"
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890260307Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890587307Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890740407Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 22 10:56:25 functional-969800 systemd[1]: docker.service: Succeeded.
	Jan 22 10:56:25 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 10:56:25 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.965408907Z" level=info msg="Starting up"
	Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.966518207Z" level=info msg="containerd not running, starting managed containerd"
	Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.967588307Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1337
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.001795407Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.035509507Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.035631707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038165807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038334707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038628507Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038744407Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038816807Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038858807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038871007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038894707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039248607Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039359407Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039378107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039547707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039641707Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039668207Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039685807Z" level=info msg="metadata content store policy set" policy=shared
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039841507Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039933007Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040079807Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040230207Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040269807Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040281707Z" level=info msg="NRI interface is disabled by configuration."
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040294707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040361507Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040377307Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040391107Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040409607Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040429307Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040449707Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040463607Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040482307Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040514007Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040527507Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040540307Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040551607Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040608107Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042558207Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042674107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042695907Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042734707Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042787207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042916307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042937207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042958307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042974407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042993607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043009507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043463207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043558107Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043662307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043706107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043727707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043761007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043778507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043801907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043893407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043920207Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043942907Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043958907Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043974107Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044581807Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044705407Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044803207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044899507Z" level=info msg="containerd successfully booted in 0.045219s"
	Jan 22 10:56:26 functional-969800 dockerd[1331]: time="2024-01-22T10:56:26.371254407Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.294692607Z" level=info msg="Loading containers: start."
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.459462607Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.525291207Z" level=info msg="Loading containers: done."
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544516307Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544552907Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544562107Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544572107Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544597307Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544708807Z" level=info msg="Daemon has completed initialization"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.576971407Z" level=info msg="API listen on [::]:2376"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.577092907Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 22 10:56:30 functional-969800 systemd[1]: Started Docker Application Container Engine.
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643012417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643161510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643191508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643608487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648436541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648545436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648613232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648628031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650622030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650800621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650820520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650831019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.675561460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.675860444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.676066234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.676182428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627214283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627479370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627708859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627925149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.661590442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.665215768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.665413259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.667762047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.837739331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838092414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838119012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838132512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029726252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029885345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029907544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029921943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442471250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442551949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442585148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442603448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.636810797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637124693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637309591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637408590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.128989239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129139537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129211936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129226336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.005359252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.018527500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.020526276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.020768074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531424444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531481843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531496243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531507943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.163527770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.164982859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.165023859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.165340756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:58:54 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.271367222Z" level=info msg="Processing signal 'terminated'"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.408491526Z" level=info msg="ignoring event" container=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.408935427Z" level=info msg="shim disconnected" id=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.409367629Z" level=warning msg="cleaning up after shim disconnected" id=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.409386829Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482131396Z" level=info msg="shim disconnected" id=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482242197Z" level=warning msg="cleaning up after shim disconnected" id=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482643498Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.485265908Z" level=info msg="ignoring event" container=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.514982317Z" level=info msg="ignoring event" container=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.517274025Z" level=info msg="shim disconnected" id=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.518024528Z" level=warning msg="cleaning up after shim disconnected" id=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.518088128Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.533465185Z" level=info msg="ignoring event" container=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533775786Z" level=info msg="shim disconnected" id=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533878986Z" level=warning msg="cleaning up after shim disconnected" id=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533896387Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.541816616Z" level=info msg="ignoring event" container=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.544778327Z" level=info msg="shim disconnected" id=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.545052128Z" level=warning msg="cleaning up after shim disconnected" id=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.545071928Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.564756500Z" level=info msg="ignoring event" container=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.566095405Z" level=info msg="shim disconnected" id=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.580413557Z" level=warning msg="cleaning up after shim disconnected" id=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.580441258Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590163393Z" level=info msg="shim disconnected" id=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590197393Z" level=warning msg="cleaning up after shim disconnected" id=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590265894Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.600700032Z" level=info msg="shim disconnected" id=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.600935033Z" level=info msg="ignoring event" container=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.600978833Z" level=info msg="ignoring event" container=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.611137470Z" level=info msg="ignoring event" container=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611370671Z" level=warning msg="cleaning up after shim disconnected" id=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611429671Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611635772Z" level=info msg="shim disconnected" id=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.612703976Z" level=warning msg="cleaning up after shim disconnected" id=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.613303678Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.613702780Z" level=info msg="ignoring event" container=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.616524590Z" level=info msg="ignoring event" container=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.618830499Z" level=info msg="shim disconnected" id=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.618976199Z" level=warning msg="cleaning up after shim disconnected" id=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.627451230Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.628495534Z" level=info msg="shim disconnected" id=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.630007140Z" level=warning msg="cleaning up after shim disconnected" id=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.630196840Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.648394307Z" level=info msg="shim disconnected" id=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.650553715Z" level=info msg="ignoring event" container=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.658301844Z" level=warning msg="cleaning up after shim disconnected" id=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.658345944Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:59 functional-969800 dockerd[1331]: time="2024-01-22T10:58:59.444520533Z" level=info msg="ignoring event" container=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.447631244Z" level=info msg="shim disconnected" id=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 namespace=moby
	Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.448443247Z" level=warning msg="cleaning up after shim disconnected" id=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 namespace=moby
	Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.448523248Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.436253177Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.489047672Z" level=info msg="ignoring event" container=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.490548477Z" level=info msg="shim disconnected" id=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.490993079Z" level=warning msg="cleaning up after shim disconnected" id=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.491010379Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.536562746Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.536997048Z" level=info msg="Daemon shutdown complete"
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.537118848Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.537155648Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 22 10:59:05 functional-969800 systemd[1]: docker.service: Succeeded.
	Jan 22 10:59:05 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 10:59:05 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 10:59:05 functional-969800 dockerd[5681]: time="2024-01-22T10:59:05.618184021Z" level=info msg="Starting up"
	Jan 22 11:00:05 functional-969800 dockerd[5681]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:00:05 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:00:05 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:00:05 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:00:05 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Jan 22 11:00:05 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:00:05 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:00:05 functional-969800 dockerd[5820]: time="2024-01-22T11:00:05.817795905Z" level=info msg="Starting up"
	Jan 22 11:01:05 functional-969800 dockerd[5820]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:01:05 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:01:05 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:01:05 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:01:05 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Jan 22 11:01:05 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:01:05 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:01:06 functional-969800 dockerd[6034]: time="2024-01-22T11:01:06.042578101Z" level=info msg="Starting up"
	Jan 22 11:02:06 functional-969800 dockerd[6034]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:02:06 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:02:06 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:02:06 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:02:06 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
	Jan 22 11:02:06 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:02:06 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:02:06 functional-969800 dockerd[6195]: time="2024-01-22T11:02:06.252465592Z" level=info msg="Starting up"
	Jan 22 11:03:06 functional-969800 dockerd[6195]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:03:06 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:03:06 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:03:06 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:03:06 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 4.
	Jan 22 11:03:06 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:03:06 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:03:06 functional-969800 dockerd[6342]: time="2024-01-22T11:03:06.467155089Z" level=info msg="Starting up"
	Jan 22 11:04:06 functional-969800 dockerd[6342]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:04:06 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:04:06 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:04:06 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:04:06 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 5.
	Jan 22 11:04:06 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:04:06 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:04:06 functional-969800 dockerd[6610]: time="2024-01-22T11:04:06.803106897Z" level=info msg="Starting up"
	Jan 22 11:05:06 functional-969800 dockerd[6610]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:05:06 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:05:06 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:05:06 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:05:06 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 6.
	Jan 22 11:05:06 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:05:06 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:05:07 functional-969800 dockerd[6768]: time="2024-01-22T11:05:07.038180841Z" level=info msg="Starting up"
	Jan 22 11:06:07 functional-969800 dockerd[6768]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:06:07 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:06:07 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:06:07 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:06:07 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 7.
	Jan 22 11:06:07 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:06:07 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:06:07 functional-969800 dockerd[6963]: time="2024-01-22T11:06:07.289156413Z" level=info msg="Starting up"
	Jan 22 11:07:07 functional-969800 dockerd[6963]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:07:07 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:07:07 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:07:07 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:07:07 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 8.
	Jan 22 11:07:07 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:07:07 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:07:07 functional-969800 dockerd[7116]: time="2024-01-22T11:07:07.566278979Z" level=info msg="Starting up"
	Jan 22 11:08:07 functional-969800 dockerd[7116]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:08:07 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:08:07 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:08:07 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:08:07 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 9.
	Jan 22 11:08:07 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:08:07 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:08:07 functional-969800 dockerd[7273]: time="2024-01-22T11:08:07.800042513Z" level=info msg="Starting up"
	Jan 22 11:09:07 functional-969800 dockerd[7273]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:09:07 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:09:07 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:09:07 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:09:07 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 10.
	Jan 22 11:09:07 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:09:07 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:09:08 functional-969800 dockerd[7420]: time="2024-01-22T11:09:08.055663358Z" level=info msg="Starting up"
	Jan 22 11:10:08 functional-969800 dockerd[7420]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:10:08 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:10:08 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:10:08 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:10:08 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 11.
	Jan 22 11:10:08 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:10:08 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:10:08 functional-969800 dockerd[7582]: time="2024-01-22T11:10:08.298276757Z" level=info msg="Starting up"
	Jan 22 11:11:08 functional-969800 dockerd[7582]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:11:08 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:11:08 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:11:08 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:11:08 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 12.
	Jan 22 11:11:08 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:11:08 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:11:08 functional-969800 dockerd[7730]: time="2024-01-22T11:11:08.568515683Z" level=info msg="Starting up"
	Jan 22 11:12:08 functional-969800 dockerd[7730]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:12:08 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:12:08 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:12:08 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:12:08 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 13.
	Jan 22 11:12:08 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:12:08 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:12:08 functional-969800 dockerd[7901]: time="2024-01-22T11:12:08.793897827Z" level=info msg="Starting up"
	Jan 22 11:13:08 functional-969800 dockerd[7901]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:13:08 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:13:08 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:13:08 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:13:08 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 14.
	Jan 22 11:13:08 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:13:08 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:13:09 functional-969800 dockerd[8084]: time="2024-01-22T11:13:09.049002906Z" level=info msg="Starting up"
	Jan 22 11:14:09 functional-969800 dockerd[8084]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:14:09 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:14:09 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:14:09 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:14:09 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 15.
	Jan 22 11:14:09 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:14:09 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:14:09 functional-969800 dockerd[8262]: time="2024-01-22T11:14:09.310111876Z" level=info msg="Starting up"
	Jan 22 11:15:09 functional-969800 dockerd[8262]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:15:09 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:15:09 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:15:09 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:15:09 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 16.
	Jan 22 11:15:09 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:15:09 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:15:09 functional-969800 dockerd[8424]: time="2024-01-22T11:15:09.550451227Z" level=info msg="Starting up"
	Jan 22 11:16:09 functional-969800 dockerd[8424]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:16:09 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:16:09 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:16:09 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:16:09 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 17.
	Jan 22 11:16:09 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:16:09 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:16:09 functional-969800 dockerd[8663]: time="2024-01-22T11:16:09.793536981Z" level=info msg="Starting up"
	Jan 22 11:17:09 functional-969800 dockerd[8663]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:17:09 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:17:09 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:17:09 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:17:09 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 18.
	Jan 22 11:17:09 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:17:09 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:17:09 functional-969800 dockerd[8816]: time="2024-01-22T11:17:09.993681624Z" level=info msg="Starting up"
	Jan 22 11:18:10 functional-969800 dockerd[8816]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:18:10 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:18:10 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:18:10 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:18:10 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 19.
	Jan 22 11:18:10 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:18:10 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:18:10 functional-969800 dockerd[8970]: time="2024-01-22T11:18:10.219717628Z" level=info msg="Starting up"
	Jan 22 11:19:10 functional-969800 dockerd[8970]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:19:10 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:19:10 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:19:10 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:19:10 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 20.
	Jan 22 11:19:10 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:19:10 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:19:10 functional-969800 dockerd[9236]: time="2024-01-22T11:19:10.567700970Z" level=info msg="Starting up"
	Jan 22 11:20:10 functional-969800 dockerd[9236]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:20:10 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:20:10 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:20:10 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:20:10 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 21.
	Jan 22 11:20:10 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:20:10 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:20:10 functional-969800 dockerd[9393]: time="2024-01-22T11:20:10.785376681Z" level=info msg="Starting up"
	Jan 22 11:21:10 functional-969800 dockerd[9393]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:21:10 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:21:10 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:21:10 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:21:10 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 22.
	Jan 22 11:21:10 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:21:10 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:21:11 functional-969800 dockerd[9646]: time="2024-01-22T11:21:11.049728847Z" level=info msg="Starting up"
	Jan 22 11:22:11 functional-969800 dockerd[9646]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:22:11 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:22:11 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:22:11 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:22:11 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 23.
	Jan 22 11:22:11 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:22:11 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:22:11 functional-969800 dockerd[9809]: time="2024-01-22T11:22:11.276508964Z" level=info msg="Starting up"
	Jan 22 11:23:11 functional-969800 dockerd[9809]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:23:11 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:23:11 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:23:11 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:23:11 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 24.
	Jan 22 11:23:11 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:23:11 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:23:11 functional-969800 dockerd[9961]: time="2024-01-22T11:23:11.489383259Z" level=info msg="Starting up"
	Jan 22 11:24:11 functional-969800 dockerd[9961]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:24:11 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:24:11 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:24:11 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:24:11 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 25.
	Jan 22 11:24:11 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:24:11 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:24:11 functional-969800 dockerd[10291]: time="2024-01-22T11:24:11.800722506Z" level=info msg="Starting up"
	Jan 22 11:24:46 functional-969800 dockerd[10291]: time="2024-01-22T11:24:46.714262322Z" level=info msg="Processing signal 'terminated'"
	Jan 22 11:25:11 functional-969800 dockerd[10291]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:25:11 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:25:11 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:25:11 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:25:11 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:25:11 functional-969800 dockerd[10628]: time="2024-01-22T11:25:11.884976767Z" level=info msg="Starting up"
	Jan 22 11:26:11 functional-969800 dockerd[10628]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:26:11 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:26:11 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:26:11 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-windows-amd64.exe start -p functional-969800 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 90
functional_test.go:757: restart took 2m48.0583672s for "functional-969800" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-969800 -n functional-969800
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-969800 -n functional-969800: exit status 2 (11.999345s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0122 11:26:12.306657    9928 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-969800 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-969800 logs -n 25: (1m48.52174s)
helpers_test.go:252: TestFunctional/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| unpause | nospam-526800 --log_dir                                                  | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:52 UTC | 22 Jan 24 10:52 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-526800 --log_dir                                                  | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:52 UTC | 22 Jan 24 10:52 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-526800 --log_dir                                                  | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:52 UTC | 22 Jan 24 10:52 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-526800 --log_dir                                                  | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:52 UTC | 22 Jan 24 10:53 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-526800 --log_dir                                                  | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:53 UTC | 22 Jan 24 10:53 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-526800 --log_dir                                                  | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:53 UTC | 22 Jan 24 10:53 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| delete  | -p nospam-526800                                                         | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:53 UTC | 22 Jan 24 10:53 UTC |
	| start   | -p functional-969800                                                     | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:53 UTC | 22 Jan 24 10:57 UTC |
	|         | --memory=4000                                                            |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                               |                   |                   |         |                     |                     |
	| start   | -p functional-969800                                                     | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:57 UTC |                     |
	|         | --alsologtostderr -v=8                                                   |                   |                   |         |                     |                     |
	| cache   | functional-969800 cache add                                              | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:05 UTC | 22 Jan 24 11:07 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | functional-969800 cache add                                              | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:07 UTC | 22 Jan 24 11:09 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | functional-969800 cache add                                              | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:09 UTC | 22 Jan 24 11:11 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-969800 cache add                                              | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:11 UTC | 22 Jan 24 11:12 UTC |
	|         | minikube-local-cache-test:functional-969800                              |                   |                   |         |                     |                     |
	| cache   | functional-969800 cache delete                                           | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:12 UTC | 22 Jan 24 11:12 UTC |
	|         | minikube-local-cache-test:functional-969800                              |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:12 UTC | 22 Jan 24 11:12 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | list                                                                     | minikube          | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:12 UTC | 22 Jan 24 11:12 UTC |
	| ssh     | functional-969800 ssh sudo                                               | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:12 UTC |                     |
	|         | crictl images                                                            |                   |                   |         |                     |                     |
	| ssh     | functional-969800                                                        | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:12 UTC |                     |
	|         | ssh sudo docker rmi                                                      |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| ssh     | functional-969800 ssh                                                    | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:13 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-969800 cache reload                                           | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:13 UTC | 22 Jan 24 11:15 UTC |
	| ssh     | functional-969800 ssh                                                    | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:15 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:15 UTC | 22 Jan 24 11:15 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:15 UTC | 22 Jan 24 11:15 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| kubectl | functional-969800 kubectl --                                             | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:18 UTC |                     |
	|         | --context functional-969800                                              |                   |                   |         |                     |                     |
	|         | get pods                                                                 |                   |                   |         |                     |                     |
	| start   | -p functional-969800                                                     | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:23 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |                   |         |                     |                     |
	|         | --wait=all                                                               |                   |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/22 11:23:24
	Running on machine: minikube7
	Binary: Built with gc go1.21.6 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0122 11:23:24.307882    6184 out.go:296] Setting OutFile to fd 720 ...
	I0122 11:23:24.309149    6184 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0122 11:23:24.309149    6184 out.go:309] Setting ErrFile to fd 752...
	I0122 11:23:24.309149    6184 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0122 11:23:24.329584    6184 out.go:303] Setting JSON to false
	I0122 11:23:24.332927    6184 start.go:128] hostinfo: {"hostname":"minikube7","uptime":41973,"bootTime":1705880631,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3930 Build 19045.3930","kernelVersion":"10.0.19045.3930 Build 19045.3930","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0122 11:23:24.332998    6184 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0122 11:23:24.334000    6184 out.go:177] * [functional-969800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	I0122 11:23:24.334878    6184 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 11:23:24.335648    6184 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0122 11:23:24.334878    6184 notify.go:220] Checking for updates...
	I0122 11:23:24.336190    6184 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0122 11:23:24.338931    6184 out.go:177]   - MINIKUBE_LOCATION=18014
	I0122 11:23:24.340085    6184 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 11:23:24.341323    6184 config.go:182] Loaded profile config "functional-969800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 11:23:24.341323    6184 driver.go:392] Setting default libvirt URI to qemu:///system
	I0122 11:23:29.718284    6184 out.go:177] * Using the hyperv driver based on existing profile
	I0122 11:23:29.718955    6184 start.go:298] selected driver: hyperv
	I0122 11:23:29.718955    6184 start.go:902] validating driver "hyperv" against &{Name:functional-969800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-969800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:172.23.182.59 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0122 11:23:29.719194    6184 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0122 11:23:29.768294    6184 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0122 11:23:29.768294    6184 cni.go:84] Creating CNI manager for ""
	I0122 11:23:29.768294    6184 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0122 11:23:29.768294    6184 start_flags.go:321] config:
	{Name:functional-969800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-969800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:172.23.182.59 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0122 11:23:29.769226    6184 iso.go:125] acquiring lock: {Name:mkdec6f26395f187031104148d2e41f41e1ae42b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 11:23:29.770231    6184 out.go:177] * Starting control plane node functional-969800 in cluster functional-969800
	I0122 11:23:29.771234    6184 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0122 11:23:29.771234    6184 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0122 11:23:29.771234    6184 cache.go:56] Caching tarball of preloaded images
	I0122 11:23:29.771234    6184 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0122 11:23:29.772224    6184 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0122 11:23:29.772224    6184 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-969800\config.json ...
	I0122 11:23:29.774224    6184 start.go:365] acquiring machines lock for functional-969800: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0122 11:23:29.774224    6184 start.go:369] acquired machines lock for "functional-969800" in 0s
	I0122 11:23:29.775224    6184 start.go:96] Skipping create...Using existing machine configuration
	I0122 11:23:29.775224    6184 fix.go:54] fixHost starting: 
	I0122 11:23:29.775224    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 11:23:32.481029    6184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 11:23:32.481029    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:23:32.481029    6184 fix.go:102] recreateIfNeeded on functional-969800: state=Running err=<nil>
	W0122 11:23:32.481029    6184 fix.go:128] unexpected machine state, will restart: <nil>
	I0122 11:23:32.482168    6184 out.go:177] * Updating the running hyperv "functional-969800" VM ...
	I0122 11:23:32.483228    6184 machine.go:88] provisioning docker machine ...
	I0122 11:23:32.483228    6184 buildroot.go:166] provisioning hostname "functional-969800"
	I0122 11:23:32.483314    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 11:23:34.645849    6184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 11:23:34.646010    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:23:34.646127    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 11:23:37.134046    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 11:23:37.134046    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:23:37.140870    6184 main.go:141] libmachine: Using SSH client type: native
	I0122 11:23:37.141572    6184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 11:23:37.141572    6184 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-969800 && echo "functional-969800" | sudo tee /etc/hostname
	I0122 11:23:37.301806    6184 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-969800
	
	I0122 11:23:37.301806    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 11:23:39.427005    6184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 11:23:39.427005    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:23:39.427118    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 11:23:41.953477    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 11:23:41.953542    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:23:41.959195    6184 main.go:141] libmachine: Using SSH client type: native
	I0122 11:23:41.959820    6184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 11:23:41.959820    6184 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-969800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-969800/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-969800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0122 11:23:42.101406    6184 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 11:23:42.101406    6184 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0122 11:23:42.101975    6184 buildroot.go:174] setting up certificates
	I0122 11:23:42.102048    6184 provision.go:83] configureAuth start
	I0122 11:23:42.102048    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 11:23:44.258658    6184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 11:23:44.258658    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:23:44.258841    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 11:23:46.770030    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 11:23:46.770030    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:23:46.770279    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 11:23:48.845024    6184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 11:23:48.845024    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:23:48.845024    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 11:23:51.359173    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 11:23:51.359173    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:23:51.359173    6184 provision.go:138] copyHostCerts
	I0122 11:23:51.359418    6184 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0122 11:23:51.359418    6184 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0122 11:23:51.360221    6184 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0122 11:23:51.361961    6184 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0122 11:23:51.361961    6184 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0122 11:23:51.362031    6184 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0122 11:23:51.363530    6184 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0122 11:23:51.363530    6184 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0122 11:23:51.363530    6184 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0122 11:23:51.364934    6184 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-969800 san=[172.23.182.59 172.23.182.59 localhost 127.0.0.1 minikube functional-969800]
	I0122 11:23:51.531845    6184 provision.go:172] copyRemoteCerts
	I0122 11:23:51.548094    6184 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0122 11:23:51.548094    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 11:23:53.701592    6184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 11:23:53.701592    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:23:53.701592    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 11:23:56.185478    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 11:23:56.185774    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:23:56.186223    6184 sshutil.go:53] new ssh client: &{IP:172.23.182.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-969800\id_rsa Username:docker}
	I0122 11:23:56.297102    6184 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7489862s)
	I0122 11:23:56.297740    6184 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0122 11:23:56.335241    6184 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0122 11:23:56.374000    6184 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0122 11:23:56.413292    6184 provision.go:86] duration metric: configureAuth took 14.3111796s
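The configureAuth phase above ends with "generating server cert" for a SAN list covering the VM IP, localhost, and the machine names. A minimal sketch of that step, using plain `openssl` rather than minikube's internal cert code; the org name and SAN values are copied from this log, the paths are illustrative temp files:

```shell
# Generate a self-signed server cert whose SANs mirror the log line:
# san=[172.23.182.59 localhost minikube ...]. Requires openssl >= 1.1.1
# for -addext. This is a stand-in, not minikube's actual provisioner.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$dir/server-key.pem" -out "$dir/server.pem" \
  -subj "/O=jenkins.functional-969800" \
  -addext "subjectAltName=IP:172.23.182.59,DNS:localhost,DNS:minikube" \
  2>/dev/null
# Read the SANs back out of the finished certificate to verify them.
sans=$(openssl x509 -in "$dir/server.pem" -noout -ext subjectAltName)
echo "$sans"
```

The resulting server.pem/server-key.pem pair corresponds to the files the log later scp's into /etc/docker on the guest.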
	I0122 11:23:56.413292    6184 buildroot.go:189] setting minikube options for container-runtime
	I0122 11:23:56.413292    6184 config.go:182] Loaded profile config "functional-969800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 11:23:56.413869    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 11:23:58.538141    6184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 11:23:58.538141    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:23:58.538141    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 11:24:01.056353    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 11:24:01.056452    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:24:01.061778    6184 main.go:141] libmachine: Using SSH client type: native
	I0122 11:24:01.062464    6184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 11:24:01.062464    6184 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0122 11:24:01.204273    6184 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0122 11:24:01.204273    6184 buildroot.go:70] root file system type: tmpfs
	I0122 11:24:01.204617    6184 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0122 11:24:01.204668    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 11:24:03.346406    6184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 11:24:03.346406    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:24:03.346406    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 11:24:05.905128    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 11:24:05.905128    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:24:05.911339    6184 main.go:141] libmachine: Using SSH client type: native
	I0122 11:24:05.912244    6184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 11:24:05.912244    6184 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0122 11:24:06.073724    6184 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0122 11:24:06.073724    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 11:24:08.229437    6184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 11:24:08.229755    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:24:08.229755    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 11:24:10.819459    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 11:24:10.819459    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:24:10.825018    6184 main.go:141] libmachine: Using SSH client type: native
	I0122 11:24:10.825726    6184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 11:24:10.825726    6184 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0122 11:24:10.978172    6184 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 11:24:10.978172    6184 machine.go:91] provisioned docker machine in 38.4947703s
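The SSH command just before "provisioned docker machine" is an idempotent unit-file swap: diff the freshly rendered docker.service against the installed one, and only move it into place (and restart the daemon) when they differ. A sketch of that pattern with temp files standing in for /lib/systemd/system/docker.service{,.new}; the real command additionally runs daemon-reload, enable, and restart:

```shell
# Idempotent config update: skip the (expensive) service restart when the
# rendered unit is byte-identical to what is already installed.
set -u
installed=$(mktemp) rendered=$(mktemp)
printf 'ExecStart=/usr/bin/dockerd --label provider=old\n' > "$installed"
printf 'ExecStart=/usr/bin/dockerd --label provider=hyperv\n' > "$rendered"
if diff -u "$installed" "$rendered" >/dev/null; then
  action=unchanged              # identical: nothing to do
else
  mv "$rendered" "$installed"   # swap in the new unit...
  action=updated                # ...real code follows with daemon-reload + restart
fi
echo "$action"
```

Because `diff` exits non-zero on any difference, the `||` form in the log (`diff ... || { mv ...; restart; }`) triggers the restart branch exactly when the file changed.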
	I0122 11:24:10.978172    6184 start.go:300] post-start starting for "functional-969800" (driver="hyperv")
	I0122 11:24:10.978172    6184 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0122 11:24:10.991666    6184 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0122 11:24:10.991666    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 11:24:13.158955    6184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 11:24:13.159108    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:24:13.159108    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 11:24:15.702276    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 11:24:15.702276    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:24:15.702816    6184 sshutil.go:53] new ssh client: &{IP:172.23.182.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-969800\id_rsa Username:docker}
	I0122 11:24:15.809972    6184 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.818285s)
	I0122 11:24:15.825715    6184 ssh_runner.go:195] Run: cat /etc/os-release
	I0122 11:24:15.831293    6184 info.go:137] Remote host: Buildroot 2021.02.12
	I0122 11:24:15.831293    6184 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0122 11:24:15.831873    6184 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0122 11:24:15.833289    6184 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> 63962.pem in /etc/ssl/certs
	I0122 11:24:15.834521    6184 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\6396\hosts -> hosts in /etc/test/nested/copy/6396
	I0122 11:24:15.845569    6184 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/6396
	I0122 11:24:15.860176    6184 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem --> /etc/ssl/certs/63962.pem (1708 bytes)
	I0122 11:24:15.898930    6184 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\6396\hosts --> /etc/test/nested/copy/6396/hosts (40 bytes)
	I0122 11:24:15.935785    6184 start.go:303] post-start completed in 4.9575914s
	I0122 11:24:15.935912    6184 fix.go:56] fixHost completed within 46.1604801s
	I0122 11:24:15.935989    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 11:24:18.084871    6184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 11:24:18.085048    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:24:18.085048    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 11:24:20.652032    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 11:24:20.652032    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:24:20.657685    6184 main.go:141] libmachine: Using SSH client type: native
	I0122 11:24:20.658373    6184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 11:24:20.658373    6184 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0122 11:24:20.786672    6184 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705922660.787395205
	
	I0122 11:24:20.786672    6184 fix.go:206] guest clock: 1705922660.787395205
	I0122 11:24:20.786672    6184 fix.go:219] Guest: 2024-01-22 11:24:20.787395205 +0000 UTC Remote: 2024-01-22 11:24:15.9359125 +0000 UTC m=+51.799138601 (delta=4.851482705s)
	I0122 11:24:20.786785    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 11:24:22.947432    6184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 11:24:22.947432    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:24:22.947617    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 11:24:25.472703    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 11:24:25.472703    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:24:25.478467    6184 main.go:141] libmachine: Using SSH client type: native
	I0122 11:24:25.479226    6184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 11:24:25.479226    6184 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1705922660
	I0122 11:24:25.631726    6184 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan 22 11:24:20 UTC 2024
	
	I0122 11:24:25.631726    6184 fix.go:226] clock set: Mon Jan 22 11:24:20 UTC 2024
	 (err=<nil>)
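The clock-fix sequence above reads the guest's epoch (`date +%s.%N`), compares it with the host-side reading, and on excessive drift resets the guest with `sudo date -s @<epoch>`. A sketch of the drift check using the epochs from this log (delta reported as 4.85s, truncated to whole seconds here); the threshold value is an illustrative assumption:

```shell
# Compare guest vs. host epoch and decide whether a clock reset is needed.
host_epoch=1705922655            # host-side "Remote" reading from the log
guest_epoch=1705922660           # guest's date output from the log
delta=$((guest_epoch - host_epoch))
abs_delta=${delta#-}             # strip a leading minus for the threshold test
if [ "$abs_delta" -gt 2 ]; then  # threshold is illustrative, not minikube's
  fix_cmd="sudo date -s @$guest_epoch"   # the command this log actually ran
fi
echo "delta=${delta}s fix=${fix_cmd:-none}"
```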
	I0122 11:24:25.631726    6184 start.go:83] releasing machines lock for "functional-969800", held for 55.8572498s
	I0122 11:24:25.632332    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 11:24:27.715614    6184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 11:24:27.715614    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:24:27.715955    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 11:24:30.263943    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 11:24:30.264303    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:24:30.269166    6184 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0122 11:24:30.269244    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 11:24:30.283449    6184 ssh_runner.go:195] Run: cat /version.json
	I0122 11:24:30.283449    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 11:24:32.445469    6184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 11:24:32.445469    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:24:32.445469    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 11:24:32.476203    6184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 11:24:32.476319    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:24:32.476319    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 11:24:35.056514    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 11:24:35.056514    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:24:35.056514    6184 sshutil.go:53] new ssh client: &{IP:172.23.182.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-969800\id_rsa Username:docker}
	I0122 11:24:35.103081    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 11:24:35.103081    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:24:35.103551    6184 sshutil.go:53] new ssh client: &{IP:172.23.182.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-969800\id_rsa Username:docker}
	I0122 11:24:45.230724    6184 ssh_runner.go:235] Completed: cat /version.json: (14.947208s)
	I0122 11:24:45.230948    6184 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (14.9616545s)
	W0122 11:24:45.230948    6184 start.go:843] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28
	stdout:
	
	stderr:
	curl: (28) Resolving timed out after 2000 milliseconds
	W0122 11:24:45.230948    6184 out.go:239] ! This VM is having trouble accessing https://registry.k8s.io
	W0122 11:24:45.230948    6184 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0122 11:24:45.247121    6184 ssh_runner.go:195] Run: systemctl --version
	I0122 11:24:45.268459    6184 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0122 11:24:45.276863    6184 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0122 11:24:45.289111    6184 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0122 11:24:45.302602    6184 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0122 11:24:45.302602    6184 start.go:475] detecting cgroup driver to use...
	I0122 11:24:45.302987    6184 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 11:24:45.344679    6184 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0122 11:24:45.376694    6184 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0122 11:24:45.391228    6184 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0122 11:24:45.403401    6184 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0122 11:24:45.434227    6184 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 11:24:45.467889    6184 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0122 11:24:45.495529    6184 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 11:24:45.526566    6184 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0122 11:24:45.555545    6184 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0122 11:24:45.586280    6184 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0122 11:24:45.619757    6184 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0122 11:24:45.651490    6184 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 11:24:45.826436    6184 ssh_runner.go:195] Run: sudo systemctl restart containerd
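The block above configures containerd for the "cgroupfs" driver through a series of in-place `sed` edits of /etc/containerd/config.toml. A sketch of the SystemdCgroup rewrite from the log, run against a throwaway config so the indentation-preserving capture group (`\1`) is visible; the TOML section name is a typical CRI layout, not taken from this log:

```shell
# Flip SystemdCgroup to false with the same sed expression the log runs.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# -r enables extended regex; ( *) captures and re-emits the indentation.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
result=$(grep 'SystemdCgroup' "$cfg")
echo "$result"
```

Note `sed -i` with no suffix is the GNU form used in the buildroot guest; BSD sed would need `-i ''`.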
	I0122 11:24:45.853776    6184 start.go:475] detecting cgroup driver to use...
	I0122 11:24:45.866418    6184 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0122 11:24:45.900041    6184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 11:24:45.933396    6184 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0122 11:24:45.974943    6184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 11:24:46.005340    6184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0122 11:24:46.026136    6184 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 11:24:46.068325    6184 ssh_runner.go:195] Run: which cri-dockerd
	I0122 11:24:46.086267    6184 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0122 11:24:46.099080    6184 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0122 11:24:46.138821    6184 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0122 11:24:46.316107    6184 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0122 11:24:46.478342    6184 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0122 11:24:46.478342    6184 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0122 11:24:46.517039    6184 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 11:24:46.692069    6184 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0122 11:26:11.896948    6184 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m25.2043524s)
	I0122 11:26:11.910605    6184 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0122 11:26:11.977390    6184 out.go:177] 
	W0122 11:26:11.978052    6184 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Mon 2024-01-22 10:54:50 UTC, ends at Mon 2024-01-22 11:26:11 UTC. --
	Jan 22 10:55:41 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.640172956Z" level=info msg="Starting up"
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.641196772Z" level=info msg="containerd not running, starting managed containerd"
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.643170703Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=696
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.681307999Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.711852976Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.711950178Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.714822923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.714949725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715319930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715475833Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715577834Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715725437Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715815938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715939840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716415947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716515049Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716533749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716692852Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716783253Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716856454Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716943756Z" level=info msg="metadata content store policy set" policy=shared
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731298280Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731388081Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731408582Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731467083Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731484883Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731498383Z" level=info msg="NRI interface is disabled by configuration."
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731528784Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731677686Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731783888Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731806788Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731822488Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731838588Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731859089Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731890989Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731924790Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731943290Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731959290Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731973491Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731992491Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732288795Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732814904Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732879305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732900805Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732940506Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733012907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733046407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733235810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733295711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733311411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733344312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733358612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733373012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733387713Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733473314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733594716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733615016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733643517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733664217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733681917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733711618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733724518Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733749418Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733762418Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733774019Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734197325Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734365128Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734471630Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734496030Z" level=info msg="containerd successfully booted in 0.054269s"
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.768676964Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.783849401Z" level=info msg="Loading containers: start."
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.997712742Z" level=info msg="Loading containers: done."
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022214804Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022261705Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022274505Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022281005Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022301806Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022416907Z" level=info msg="Daemon has completed initialization"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.083652704Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.083926108Z" level=info msg="API listen on [::]:2376"
	Jan 22 10:55:42 functional-969800 systemd[1]: Started Docker Application Container Engine.
	Jan 22 10:56:12 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.621957907Z" level=info msg="Processing signal 'terminated'"
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623204307Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623373807Z" level=info msg="Daemon shutdown complete"
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623417807Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623429307Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 22 10:56:13 functional-969800 systemd[1]: docker.service: Succeeded.
	Jan 22 10:56:13 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 10:56:13 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.697995107Z" level=info msg="Starting up"
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.699240007Z" level=info msg="containerd not running, starting managed containerd"
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.700630907Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1031
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.732892407Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.759794007Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.759947407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.762945807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763156107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763474707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763800907Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764170707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764208607Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764225307Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764251107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764439107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764536207Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764552707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764742807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764836407Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764867007Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764897007Z" level=info msg="metadata content store policy set" policy=shared
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765451307Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765545607Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765570407Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765599407Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765631407Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765670507Z" level=info msg="NRI interface is disabled by configuration."
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765715707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765770507Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765918107Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765954407Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765992607Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766011607Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766135807Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766161807Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766178607Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766211907Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766239507Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766258407Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766273107Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766317107Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767449407Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767559907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767585207Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767614707Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767679407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767698307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767713707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767728107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767743307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767759107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767773607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767790107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767807907Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767846207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767884107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767901707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767917307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767934707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767953007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767985707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768000707Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768018307Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768110907Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768150607Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770293407Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770589207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770747207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770779907Z" level=info msg="containerd successfully booted in 0.039287s"
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.796437607Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.812301607Z" level=info msg="Loading containers: start."
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.959771407Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.025356107Z" level=info msg="Loading containers: done."
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043938307Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043967507Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043976207Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043982907Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.044002007Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.044081507Z" level=info msg="Daemon has completed initialization"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.084793407Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 22 10:56:14 functional-969800 systemd[1]: Started Docker Application Container Engine.
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.087289507Z" level=info msg="API listen on [::]:2376"
	Jan 22 10:56:24 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.888530207Z" level=info msg="Processing signal 'terminated'"
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890215407Z" level=info msg="Daemon shutdown complete"
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890260307Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890587307Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890740407Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 22 10:56:25 functional-969800 systemd[1]: docker.service: Succeeded.
	Jan 22 10:56:25 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 10:56:25 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.965408907Z" level=info msg="Starting up"
	Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.966518207Z" level=info msg="containerd not running, starting managed containerd"
	Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.967588307Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1337
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.001795407Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.035509507Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.035631707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038165807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038334707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038628507Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038744407Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038816807Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038858807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038871007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038894707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039248607Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039359407Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039378107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039547707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039641707Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039668207Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039685807Z" level=info msg="metadata content store policy set" policy=shared
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039841507Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039933007Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040079807Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040230207Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040269807Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040281707Z" level=info msg="NRI interface is disabled by configuration."
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040294707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040361507Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040377307Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040391107Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040409607Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040429307Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040449707Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040463607Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040482307Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040514007Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040527507Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040540307Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040551607Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040608107Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042558207Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042674107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042695907Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042734707Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042787207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042916307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042937207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042958307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042974407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042993607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043009507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043463207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043558107Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043662307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043706107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043727707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043761007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043778507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043801907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043893407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043920207Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043942907Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043958907Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043974107Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044581807Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044705407Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044803207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044899507Z" level=info msg="containerd successfully booted in 0.045219s"
	Jan 22 10:56:26 functional-969800 dockerd[1331]: time="2024-01-22T10:56:26.371254407Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.294692607Z" level=info msg="Loading containers: start."
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.459462607Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.525291207Z" level=info msg="Loading containers: done."
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544516307Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544552907Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544562107Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544572107Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544597307Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544708807Z" level=info msg="Daemon has completed initialization"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.576971407Z" level=info msg="API listen on [::]:2376"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.577092907Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 22 10:56:30 functional-969800 systemd[1]: Started Docker Application Container Engine.
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643012417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643161510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643191508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643608487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648436541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648545436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648613232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648628031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650622030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650800621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650820520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650831019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.675561460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.675860444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.676066234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.676182428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627214283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627479370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627708859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627925149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.661590442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.665215768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.665413259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.667762047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.837739331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838092414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838119012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838132512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029726252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029885345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029907544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029921943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442471250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442551949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442585148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442603448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.636810797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637124693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637309591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637408590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.128989239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129139537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129211936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129226336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.005359252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.018527500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.020526276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.020768074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531424444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531481843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531496243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531507943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.163527770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.164982859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.165023859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.165340756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:58:54 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.271367222Z" level=info msg="Processing signal 'terminated'"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.408491526Z" level=info msg="ignoring event" container=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.408935427Z" level=info msg="shim disconnected" id=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.409367629Z" level=warning msg="cleaning up after shim disconnected" id=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.409386829Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482131396Z" level=info msg="shim disconnected" id=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482242197Z" level=warning msg="cleaning up after shim disconnected" id=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482643498Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.485265908Z" level=info msg="ignoring event" container=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.514982317Z" level=info msg="ignoring event" container=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.517274025Z" level=info msg="shim disconnected" id=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.518024528Z" level=warning msg="cleaning up after shim disconnected" id=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.518088128Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.533465185Z" level=info msg="ignoring event" container=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533775786Z" level=info msg="shim disconnected" id=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533878986Z" level=warning msg="cleaning up after shim disconnected" id=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533896387Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.541816616Z" level=info msg="ignoring event" container=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.544778327Z" level=info msg="shim disconnected" id=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.545052128Z" level=warning msg="cleaning up after shim disconnected" id=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.545071928Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.564756500Z" level=info msg="ignoring event" container=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.566095405Z" level=info msg="shim disconnected" id=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.580413557Z" level=warning msg="cleaning up after shim disconnected" id=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.580441258Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590163393Z" level=info msg="shim disconnected" id=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590197393Z" level=warning msg="cleaning up after shim disconnected" id=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590265894Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.600700032Z" level=info msg="shim disconnected" id=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.600935033Z" level=info msg="ignoring event" container=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.600978833Z" level=info msg="ignoring event" container=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.611137470Z" level=info msg="ignoring event" container=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611370671Z" level=warning msg="cleaning up after shim disconnected" id=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611429671Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611635772Z" level=info msg="shim disconnected" id=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.612703976Z" level=warning msg="cleaning up after shim disconnected" id=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.613303678Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.613702780Z" level=info msg="ignoring event" container=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.616524590Z" level=info msg="ignoring event" container=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.618830499Z" level=info msg="shim disconnected" id=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.618976199Z" level=warning msg="cleaning up after shim disconnected" id=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.627451230Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.628495534Z" level=info msg="shim disconnected" id=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.630007140Z" level=warning msg="cleaning up after shim disconnected" id=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.630196840Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.648394307Z" level=info msg="shim disconnected" id=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.650553715Z" level=info msg="ignoring event" container=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.658301844Z" level=warning msg="cleaning up after shim disconnected" id=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.658345944Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:59 functional-969800 dockerd[1331]: time="2024-01-22T10:58:59.444520533Z" level=info msg="ignoring event" container=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.447631244Z" level=info msg="shim disconnected" id=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 namespace=moby
	Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.448443247Z" level=warning msg="cleaning up after shim disconnected" id=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 namespace=moby
	Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.448523248Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.436253177Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.489047672Z" level=info msg="ignoring event" container=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.490548477Z" level=info msg="shim disconnected" id=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.490993079Z" level=warning msg="cleaning up after shim disconnected" id=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.491010379Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.536562746Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.536997048Z" level=info msg="Daemon shutdown complete"
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.537118848Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.537155648Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 22 10:59:05 functional-969800 systemd[1]: docker.service: Succeeded.
	Jan 22 10:59:05 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 10:59:05 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 10:59:05 functional-969800 dockerd[5681]: time="2024-01-22T10:59:05.618184021Z" level=info msg="Starting up"
	Jan 22 11:00:05 functional-969800 dockerd[5681]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:00:05 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:00:05 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:00:05 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:00:05 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Jan 22 11:00:05 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:00:05 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:00:05 functional-969800 dockerd[5820]: time="2024-01-22T11:00:05.817795905Z" level=info msg="Starting up"
	Jan 22 11:01:05 functional-969800 dockerd[5820]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:01:05 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:01:05 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:01:05 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:01:05 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Jan 22 11:01:05 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:01:05 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:01:06 functional-969800 dockerd[6034]: time="2024-01-22T11:01:06.042578101Z" level=info msg="Starting up"
	Jan 22 11:02:06 functional-969800 dockerd[6034]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:02:06 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:02:06 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:02:06 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:02:06 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
	Jan 22 11:02:06 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:02:06 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:02:06 functional-969800 dockerd[6195]: time="2024-01-22T11:02:06.252465592Z" level=info msg="Starting up"
	Jan 22 11:03:06 functional-969800 dockerd[6195]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:03:06 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:03:06 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:03:06 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:03:06 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 4.
	Jan 22 11:03:06 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:03:06 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:03:06 functional-969800 dockerd[6342]: time="2024-01-22T11:03:06.467155089Z" level=info msg="Starting up"
	Jan 22 11:04:06 functional-969800 dockerd[6342]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:04:06 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:04:06 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:04:06 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:04:06 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 5.
	Jan 22 11:04:06 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:04:06 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:04:06 functional-969800 dockerd[6610]: time="2024-01-22T11:04:06.803106897Z" level=info msg="Starting up"
	Jan 22 11:05:06 functional-969800 dockerd[6610]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:05:06 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:05:06 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:05:06 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:05:06 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 6.
	Jan 22 11:05:06 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:05:06 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:05:07 functional-969800 dockerd[6768]: time="2024-01-22T11:05:07.038180841Z" level=info msg="Starting up"
	Jan 22 11:06:07 functional-969800 dockerd[6768]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:06:07 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:06:07 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:06:07 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:06:07 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 7.
	Jan 22 11:06:07 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:06:07 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:06:07 functional-969800 dockerd[6963]: time="2024-01-22T11:06:07.289156413Z" level=info msg="Starting up"
	Jan 22 11:07:07 functional-969800 dockerd[6963]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:07:07 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:07:07 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:07:07 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:07:07 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 8.
	Jan 22 11:07:07 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:07:07 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:07:07 functional-969800 dockerd[7116]: time="2024-01-22T11:07:07.566278979Z" level=info msg="Starting up"
	Jan 22 11:08:07 functional-969800 dockerd[7116]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:08:07 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:08:07 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:08:07 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:08:07 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 9.
	Jan 22 11:08:07 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:08:07 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:08:07 functional-969800 dockerd[7273]: time="2024-01-22T11:08:07.800042513Z" level=info msg="Starting up"
	Jan 22 11:09:07 functional-969800 dockerd[7273]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:09:07 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:09:07 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:09:07 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:09:07 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 10.
	Jan 22 11:09:07 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:09:07 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:09:08 functional-969800 dockerd[7420]: time="2024-01-22T11:09:08.055663358Z" level=info msg="Starting up"
	Jan 22 11:10:08 functional-969800 dockerd[7420]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:10:08 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:10:08 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:10:08 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:10:08 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 11.
	Jan 22 11:10:08 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:10:08 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:10:08 functional-969800 dockerd[7582]: time="2024-01-22T11:10:08.298276757Z" level=info msg="Starting up"
	Jan 22 11:11:08 functional-969800 dockerd[7582]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:11:08 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:11:08 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:11:08 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:11:08 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 12.
	Jan 22 11:11:08 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:11:08 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:11:08 functional-969800 dockerd[7730]: time="2024-01-22T11:11:08.568515683Z" level=info msg="Starting up"
	Jan 22 11:12:08 functional-969800 dockerd[7730]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:12:08 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:12:08 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:12:08 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:12:08 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 13.
	Jan 22 11:12:08 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:12:08 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:12:08 functional-969800 dockerd[7901]: time="2024-01-22T11:12:08.793897827Z" level=info msg="Starting up"
	Jan 22 11:13:08 functional-969800 dockerd[7901]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:13:08 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:13:08 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:13:08 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:13:08 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 14.
	Jan 22 11:13:08 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:13:08 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:13:09 functional-969800 dockerd[8084]: time="2024-01-22T11:13:09.049002906Z" level=info msg="Starting up"
	Jan 22 11:14:09 functional-969800 dockerd[8084]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:14:09 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:14:09 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:14:09 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:14:09 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 15.
	Jan 22 11:14:09 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:14:09 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:14:09 functional-969800 dockerd[8262]: time="2024-01-22T11:14:09.310111876Z" level=info msg="Starting up"
	Jan 22 11:15:09 functional-969800 dockerd[8262]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:15:09 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:15:09 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:15:09 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:15:09 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 16.
	Jan 22 11:15:09 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:15:09 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:15:09 functional-969800 dockerd[8424]: time="2024-01-22T11:15:09.550451227Z" level=info msg="Starting up"
	Jan 22 11:16:09 functional-969800 dockerd[8424]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:16:09 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:16:09 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:16:09 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:16:09 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 17.
	Jan 22 11:16:09 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:16:09 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:16:09 functional-969800 dockerd[8663]: time="2024-01-22T11:16:09.793536981Z" level=info msg="Starting up"
	Jan 22 11:17:09 functional-969800 dockerd[8663]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:17:09 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:17:09 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:17:09 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:17:09 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 18.
	Jan 22 11:17:09 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:17:09 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:17:09 functional-969800 dockerd[8816]: time="2024-01-22T11:17:09.993681624Z" level=info msg="Starting up"
	Jan 22 11:18:10 functional-969800 dockerd[8816]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:18:10 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:18:10 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:18:10 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:18:10 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 19.
	Jan 22 11:18:10 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:18:10 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:18:10 functional-969800 dockerd[8970]: time="2024-01-22T11:18:10.219717628Z" level=info msg="Starting up"
	Jan 22 11:19:10 functional-969800 dockerd[8970]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:19:10 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:19:10 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:19:10 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:19:10 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 20.
	Jan 22 11:19:10 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:19:10 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:19:10 functional-969800 dockerd[9236]: time="2024-01-22T11:19:10.567700970Z" level=info msg="Starting up"
	Jan 22 11:20:10 functional-969800 dockerd[9236]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:20:10 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:20:10 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:20:10 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:20:10 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 21.
	Jan 22 11:20:10 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:20:10 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:20:10 functional-969800 dockerd[9393]: time="2024-01-22T11:20:10.785376681Z" level=info msg="Starting up"
	Jan 22 11:21:10 functional-969800 dockerd[9393]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:21:10 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:21:10 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:21:10 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:21:10 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 22.
	Jan 22 11:21:10 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:21:10 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:21:11 functional-969800 dockerd[9646]: time="2024-01-22T11:21:11.049728847Z" level=info msg="Starting up"
	Jan 22 11:22:11 functional-969800 dockerd[9646]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:22:11 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:22:11 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:22:11 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:22:11 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 23.
	Jan 22 11:22:11 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:22:11 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:22:11 functional-969800 dockerd[9809]: time="2024-01-22T11:22:11.276508964Z" level=info msg="Starting up"
	Jan 22 11:23:11 functional-969800 dockerd[9809]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:23:11 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:23:11 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:23:11 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:23:11 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 24.
	Jan 22 11:23:11 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:23:11 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:23:11 functional-969800 dockerd[9961]: time="2024-01-22T11:23:11.489383259Z" level=info msg="Starting up"
	Jan 22 11:24:11 functional-969800 dockerd[9961]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:24:11 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:24:11 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:24:11 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:24:11 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 25.
	Jan 22 11:24:11 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:24:11 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:24:11 functional-969800 dockerd[10291]: time="2024-01-22T11:24:11.800722506Z" level=info msg="Starting up"
	Jan 22 11:24:46 functional-969800 dockerd[10291]: time="2024-01-22T11:24:46.714262322Z" level=info msg="Processing signal 'terminated'"
	Jan 22 11:25:11 functional-969800 dockerd[10291]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:25:11 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:25:11 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:25:11 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:25:11 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:25:11 functional-969800 dockerd[10628]: time="2024-01-22T11:25:11.884976767Z" level=info msg="Starting up"
	Jan 22 11:26:11 functional-969800 dockerd[10628]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:26:11 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:26:11 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:26:11 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0122 11:26:11.979558    6184 out.go:239] * 
	W0122 11:26:11.981138    6184 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0122 11:26:11.982705    6184 out.go:177] 
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-01-22 10:54:50 UTC, ends at Mon 2024-01-22 11:27:12 UTC. --
	Jan 22 11:25:11 functional-969800 dockerd[10291]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:25:11 functional-969800 cri-dockerd[1222]: time="2024-01-22T11:25:11Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Jan 22 11:25:11 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:25:11 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:25:11 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:25:11 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:25:11 functional-969800 dockerd[10628]: time="2024-01-22T11:25:11.884976767Z" level=info msg="Starting up"
	Jan 22 11:26:11 functional-969800 dockerd[10628]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:26:11 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:26:11 functional-969800 cri-dockerd[1222]: time="2024-01-22T11:26:11Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Jan 22 11:26:11 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:26:11 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:26:12 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Jan 22 11:26:12 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:26:12 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:26:12 functional-969800 dockerd[10769]: time="2024-01-22T11:26:12.128745714Z" level=info msg="Starting up"
	Jan 22 11:27:12 functional-969800 dockerd[10769]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:27:12 functional-969800 cri-dockerd[1222]: time="2024-01-22T11:27:12Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Jan 22 11:27:12 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:27:12 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:27:12 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:27:12 functional-969800 cri-dockerd[1222]: time="2024-01-22T11:27:12Z" level=error msg="Unable to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:27:12 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Jan 22 11:27:12 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:27:12 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	
	==> describe nodes <==
	
	==> dmesg <==
	[Jan22 10:55] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.139168] systemd-fstab-generator[682]: Ignoring "noauto" for root device
	[Jan22 10:56] systemd-fstab-generator[954]: Ignoring "noauto" for root device
	[  +0.548146] systemd-fstab-generator[992]: Ignoring "noauto" for root device
	[  +0.177021] systemd-fstab-generator[1003]: Ignoring "noauto" for root device
	[  +0.192499] systemd-fstab-generator[1016]: Ignoring "noauto" for root device
	[  +1.326236] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.405236] systemd-fstab-generator[1177]: Ignoring "noauto" for root device
	[  +0.162074] systemd-fstab-generator[1188]: Ignoring "noauto" for root device
	[  +0.187859] systemd-fstab-generator[1199]: Ignoring "noauto" for root device
	[  +0.249478] systemd-fstab-generator[1214]: Ignoring "noauto" for root device
	[  +9.926271] systemd-fstab-generator[1322]: Ignoring "noauto" for root device
	[  +5.555073] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.934437] systemd-fstab-generator[1705]: Ignoring "noauto" for root device
	[  +0.641554] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.684973] systemd-fstab-generator[2663]: Ignoring "noauto" for root device
	[Jan22 10:57] kauditd_printk_skb: 21 callbacks suppressed
	[Jan22 10:58] systemd-fstab-generator[5187]: Ignoring "noauto" for root device
	[  +0.564007] systemd-fstab-generator[5225]: Ignoring "noauto" for root device
	[  +0.215913] systemd-fstab-generator[5236]: Ignoring "noauto" for root device
	[  +0.280562] systemd-fstab-generator[5249]: Ignoring "noauto" for root device
	[Jan22 11:24] systemd-fstab-generator[10504]: Ignoring "noauto" for root device
	[  +0.487513] systemd-fstab-generator[10540]: Ignoring "noauto" for root device
	[  +0.178599] systemd-fstab-generator[10555]: Ignoring "noauto" for root device
	[  +0.195768] systemd-fstab-generator[10569]: Ignoring "noauto" for root device
	
	
	==> kernel <==
	 11:28:12 up 33 min,  0 users,  load average: 0.06, 0.03, 0.02
	Linux functional-969800 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-22 10:54:50 UTC, ends at Mon 2024-01-22 11:28:12 UTC. --
	Jan 22 11:28:06 functional-969800 kubelet[2697]: E0122 11:28:06.535625    2697 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-969800?timeout=10s\": dial tcp 172.23.182.59:8441: connect: connection refused" interval="7s"
	Jan 22 11:28:06 functional-969800 kubelet[2697]: E0122 11:28:06.950704    2697 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"coredns-5dd5756b68-jcpvf.17aca63f3f07cb04", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"coredns-5dd5756b68-jcpvf", UID:"58b0ca36-9430-4532-86e4-efac48d6ca75", APIVersion:"v1", ResourceVersion:"330", FieldPath:"spec.containers{coredns}"}, Reason:"Unhealthy", Message:"Readiness probe failed: Get \"http://10.244.0.2:8181/ready\": context deadline exceeded (Client.
Timeout exceeded while awaiting headers)", Source:v1.EventSource{Component:"kubelet", Host:"functional-969800"}, FirstTimestamp:time.Date(2024, time.January, 22, 10, 59, 9, 437201156, time.Local), LastTimestamp:time.Date(2024, time.January, 22, 10, 59, 9, 437201156, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"functional-969800"}': 'Post "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events": dial tcp 172.23.182.59:8441: connect: connection refused'(may retry after sleeping)
	Jan 22 11:28:08 functional-969800 kubelet[2697]: E0122 11:28:08.312566    2697 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-969800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-969800?resourceVersion=0&timeout=10s\": dial tcp 172.23.182.59:8441: connect: connection refused"
	Jan 22 11:28:08 functional-969800 kubelet[2697]: E0122 11:28:08.313023    2697 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-969800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-969800?timeout=10s\": dial tcp 172.23.182.59:8441: connect: connection refused"
	Jan 22 11:28:08 functional-969800 kubelet[2697]: E0122 11:28:08.313305    2697 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-969800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-969800?timeout=10s\": dial tcp 172.23.182.59:8441: connect: connection refused"
	Jan 22 11:28:08 functional-969800 kubelet[2697]: E0122 11:28:08.313485    2697 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-969800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-969800?timeout=10s\": dial tcp 172.23.182.59:8441: connect: connection refused"
	Jan 22 11:28:08 functional-969800 kubelet[2697]: E0122 11:28:08.313872    2697 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-969800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-969800?timeout=10s\": dial tcp 172.23.182.59:8441: connect: connection refused"
	Jan 22 11:28:08 functional-969800 kubelet[2697]: E0122 11:28:08.313909    2697 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
	Jan 22 11:28:08 functional-969800 kubelet[2697]: E0122 11:28:08.864119    2697 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 29m14.946395212s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Jan 22 11:28:12 functional-969800 kubelet[2697]: E0122 11:28:12.488240    2697 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jan 22 11:28:12 functional-969800 kubelet[2697]: E0122 11:28:12.491676    2697 kuberuntime_image.go:103] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:28:12 functional-969800 kubelet[2697]: I0122 11:28:12.495893    2697 image_gc_manager.go:210] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:28:12 functional-969800 kubelet[2697]: E0122 11:28:12.495299    2697 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jan 22 11:28:12 functional-969800 kubelet[2697]: E0122 11:28:12.496221    2697 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:28:12 functional-969800 kubelet[2697]: E0122 11:28:12.491581    2697 kubelet.go:2865] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:28:12 functional-969800 kubelet[2697]: E0122 11:28:12.491378    2697 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:28:12 functional-969800 kubelet[2697]: E0122 11:28:12.495396    2697 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jan 22 11:28:12 functional-969800 kubelet[2697]: E0122 11:28:12.495466    2697 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jan 22 11:28:12 functional-969800 kubelet[2697]: E0122 11:28:12.496634    2697 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:28:12 functional-969800 kubelet[2697]: E0122 11:28:12.496893    2697 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:28:12 functional-969800 kubelet[2697]: E0122 11:28:12.497222    2697 kuberuntime_container.go:477] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:28:12 functional-969800 kubelet[2697]: E0122 11:28:12.497359    2697 container_log_manager.go:185] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:28:12 functional-969800 kubelet[2697]: E0122 11:28:12.501019    2697 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jan 22 11:28:12 functional-969800 kubelet[2697]: E0122 11:28:12.501200    2697 kuberuntime_container.go:477] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jan 22 11:28:12 functional-969800 kubelet[2697]: E0122 11:28:12.501639    2697 kubelet.go:1402] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	
	

-- /stdout --
** stderr ** 
	W0122 11:26:24.290709   10988 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0122 11:27:12.141892   10988 logs.go:281] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:27:12.182220   10988 logs.go:281] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:27:12.213657   10988 logs.go:281] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:27:12.246218   10988 logs.go:281] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:27:12.277194   10988 logs.go:281] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:27:12.308894   10988 logs.go:281] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:27:12.343315   10988 logs.go:281] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:27:12.378136   10988 logs.go:281] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:28:12.486005   10988 logs.go:195] command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-01-22T11:27:14Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	 output: "\n** stderr ** \ntime=\"2024-01-22T11:27:14Z\" level=fatal msg=\"validate service connection: validate CRI v1 runtime API for endpoint \\\"unix:///var/run/cri-dockerd.sock\\\": rpc error: code = DeadlineExceeded desc = context deadline exceeded\"\nCannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?\n\n** /stderr **"
	E0122 11:28:12.588698   10988 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8441 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: container status, describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-969800 -n functional-969800
E0122 11:28:23.956469    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-969800 -n functional-969800: exit status 2 (12.0088047s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0122 11:28:13.134931    9344 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-969800" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (300.91s)

TestFunctional/serial/ComponentHealth (120.64s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-969800 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-969800 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (10.4342476s)

** stderr ** 
	E0122 11:28:27.201507    2932 memcache.go:265] couldn't get current server API group list: Get "https://172.23.182.59:8441/api?timeout=32s": dial tcp 172.23.182.59:8441: connectex: No connection could be made because the target machine actively refused it.
	E0122 11:28:29.326232    2932 memcache.go:265] couldn't get current server API group list: Get "https://172.23.182.59:8441/api?timeout=32s": dial tcp 172.23.182.59:8441: connectex: No connection could be made because the target machine actively refused it.
	E0122 11:28:31.366363    2932 memcache.go:265] couldn't get current server API group list: Get "https://172.23.182.59:8441/api?timeout=32s": dial tcp 172.23.182.59:8441: connectex: No connection could be made because the target machine actively refused it.
	E0122 11:28:33.410435    2932 memcache.go:265] couldn't get current server API group list: Get "https://172.23.182.59:8441/api?timeout=32s": dial tcp 172.23.182.59:8441: connectex: No connection could be made because the target machine actively refused it.
	E0122 11:28:35.454399    2932 memcache.go:265] couldn't get current server API group list: Get "https://172.23.182.59:8441/api?timeout=32s": dial tcp 172.23.182.59:8441: connectex: No connection could be made because the target machine actively refused it.
	Unable to connect to the server: dial tcp 172.23.182.59:8441: connectex: No connection could be made because the target machine actively refused it.

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-969800 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-969800 -n functional-969800
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-969800 -n functional-969800: exit status 2 (11.966971s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0122 11:28:35.597091    1872 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-969800 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-969800 logs -n 25: (1m25.8209644s)
helpers_test.go:252: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| unpause | nospam-526800 --log_dir                                                  | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:52 UTC | 22 Jan 24 10:52 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-526800 --log_dir                                                  | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:52 UTC | 22 Jan 24 10:52 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-526800 --log_dir                                                  | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:52 UTC | 22 Jan 24 10:52 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-526800 --log_dir                                                  | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:52 UTC | 22 Jan 24 10:53 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-526800 --log_dir                                                  | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:53 UTC | 22 Jan 24 10:53 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-526800 --log_dir                                                  | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:53 UTC | 22 Jan 24 10:53 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| delete  | -p nospam-526800                                                         | nospam-526800     | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:53 UTC | 22 Jan 24 10:53 UTC |
	| start   | -p functional-969800                                                     | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:53 UTC | 22 Jan 24 10:57 UTC |
	|         | --memory=4000                                                            |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                               |                   |                   |         |                     |                     |
	| start   | -p functional-969800                                                     | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:57 UTC |                     |
	|         | --alsologtostderr -v=8                                                   |                   |                   |         |                     |                     |
	| cache   | functional-969800 cache add                                              | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:05 UTC | 22 Jan 24 11:07 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | functional-969800 cache add                                              | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:07 UTC | 22 Jan 24 11:09 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | functional-969800 cache add                                              | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:09 UTC | 22 Jan 24 11:11 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-969800 cache add                                              | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:11 UTC | 22 Jan 24 11:12 UTC |
	|         | minikube-local-cache-test:functional-969800                              |                   |                   |         |                     |                     |
	| cache   | functional-969800 cache delete                                           | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:12 UTC | 22 Jan 24 11:12 UTC |
	|         | minikube-local-cache-test:functional-969800                              |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:12 UTC | 22 Jan 24 11:12 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | list                                                                     | minikube          | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:12 UTC | 22 Jan 24 11:12 UTC |
	| ssh     | functional-969800 ssh sudo                                               | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:12 UTC |                     |
	|         | crictl images                                                            |                   |                   |         |                     |                     |
	| ssh     | functional-969800                                                        | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:12 UTC |                     |
	|         | ssh sudo docker rmi                                                      |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| ssh     | functional-969800 ssh                                                    | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:13 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-969800 cache reload                                           | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:13 UTC | 22 Jan 24 11:15 UTC |
	| ssh     | functional-969800 ssh                                                    | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:15 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:15 UTC | 22 Jan 24 11:15 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:15 UTC | 22 Jan 24 11:15 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| kubectl | functional-969800 kubectl --                                             | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:18 UTC |                     |
	|         | --context functional-969800                                              |                   |                   |         |                     |                     |
	|         | get pods                                                                 |                   |                   |         |                     |                     |
	| start   | -p functional-969800                                                     | functional-969800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:23 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |                   |         |                     |                     |
	|         | --wait=all                                                               |                   |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/22 11:23:24
	Running on machine: minikube7
	Binary: Built with gc go1.21.6 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0122 11:23:24.307882    6184 out.go:296] Setting OutFile to fd 720 ...
	I0122 11:23:24.309149    6184 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0122 11:23:24.309149    6184 out.go:309] Setting ErrFile to fd 752...
	I0122 11:23:24.309149    6184 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0122 11:23:24.329584    6184 out.go:303] Setting JSON to false
	I0122 11:23:24.332927    6184 start.go:128] hostinfo: {"hostname":"minikube7","uptime":41973,"bootTime":1705880631,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3930 Build 19045.3930","kernelVersion":"10.0.19045.3930 Build 19045.3930","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0122 11:23:24.332998    6184 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0122 11:23:24.334000    6184 out.go:177] * [functional-969800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	I0122 11:23:24.334878    6184 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 11:23:24.335648    6184 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0122 11:23:24.334878    6184 notify.go:220] Checking for updates...
	I0122 11:23:24.336190    6184 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0122 11:23:24.338931    6184 out.go:177]   - MINIKUBE_LOCATION=18014
	I0122 11:23:24.340085    6184 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 11:23:24.341323    6184 config.go:182] Loaded profile config "functional-969800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 11:23:24.341323    6184 driver.go:392] Setting default libvirt URI to qemu:///system
	I0122 11:23:29.718284    6184 out.go:177] * Using the hyperv driver based on existing profile
	I0122 11:23:29.718955    6184 start.go:298] selected driver: hyperv
	I0122 11:23:29.718955    6184 start.go:902] validating driver "hyperv" against &{Name:functional-969800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.28.4 ClusterName:functional-969800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:172.23.182.59 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:
/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0122 11:23:29.719194    6184 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0122 11:23:29.768294    6184 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0122 11:23:29.768294    6184 cni.go:84] Creating CNI manager for ""
	I0122 11:23:29.768294    6184 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0122 11:23:29.768294    6184 start_flags.go:321] config:
	{Name:functional-969800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-969800 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:172.23.182.59 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/
minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0122 11:23:29.769226    6184 iso.go:125] acquiring lock: {Name:mkdec6f26395f187031104148d2e41f41e1ae42b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 11:23:29.770231    6184 out.go:177] * Starting control plane node functional-969800 in cluster functional-969800
	I0122 11:23:29.771234    6184 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0122 11:23:29.771234    6184 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0122 11:23:29.771234    6184 cache.go:56] Caching tarball of preloaded images
	I0122 11:23:29.771234    6184 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0122 11:23:29.772224    6184 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0122 11:23:29.772224    6184 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-969800\config.json ...
	I0122 11:23:29.774224    6184 start.go:365] acquiring machines lock for functional-969800: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0122 11:23:29.774224    6184 start.go:369] acquired machines lock for "functional-969800" in 0s
	I0122 11:23:29.775224    6184 start.go:96] Skipping create...Using existing machine configuration
	I0122 11:23:29.775224    6184 fix.go:54] fixHost starting: 
	I0122 11:23:29.775224    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 11:23:32.481029    6184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 11:23:32.481029    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:23:32.481029    6184 fix.go:102] recreateIfNeeded on functional-969800: state=Running err=<nil>
	W0122 11:23:32.481029    6184 fix.go:128] unexpected machine state, will restart: <nil>
	I0122 11:23:32.482168    6184 out.go:177] * Updating the running hyperv "functional-969800" VM ...
	I0122 11:23:32.483228    6184 machine.go:88] provisioning docker machine ...
	I0122 11:23:32.483228    6184 buildroot.go:166] provisioning hostname "functional-969800"
	I0122 11:23:32.483314    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 11:23:34.645849    6184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 11:23:34.646010    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:23:34.646127    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 11:23:37.134046    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 11:23:37.134046    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:23:37.140870    6184 main.go:141] libmachine: Using SSH client type: native
	I0122 11:23:37.141572    6184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 11:23:37.141572    6184 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-969800 && echo "functional-969800" | sudo tee /etc/hostname
	I0122 11:23:37.301806    6184 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-969800
	
	I0122 11:23:37.301806    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 11:23:39.427005    6184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 11:23:39.427005    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:23:39.427118    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 11:23:41.953477    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 11:23:41.953542    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:23:41.959195    6184 main.go:141] libmachine: Using SSH client type: native
	I0122 11:23:41.959820    6184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 11:23:41.959820    6184 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-969800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-969800/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-969800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0122 11:23:42.101406    6184 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 11:23:42.101406    6184 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0122 11:23:42.101975    6184 buildroot.go:174] setting up certificates
	I0122 11:23:42.102048    6184 provision.go:83] configureAuth start
	I0122 11:23:42.102048    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 11:23:44.258658    6184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 11:23:44.258658    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:23:44.258841    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 11:23:46.770030    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 11:23:46.770030    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:23:46.770279    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 11:23:48.845024    6184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 11:23:48.845024    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:23:48.845024    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 11:23:51.359173    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 11:23:51.359173    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:23:51.359173    6184 provision.go:138] copyHostCerts
	I0122 11:23:51.359418    6184 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0122 11:23:51.359418    6184 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0122 11:23:51.360221    6184 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0122 11:23:51.361961    6184 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0122 11:23:51.361961    6184 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0122 11:23:51.362031    6184 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0122 11:23:51.363530    6184 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0122 11:23:51.363530    6184 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0122 11:23:51.363530    6184 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0122 11:23:51.364934    6184 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-969800 san=[172.23.182.59 172.23.182.59 localhost 127.0.0.1 minikube functional-969800]
	I0122 11:23:51.531845    6184 provision.go:172] copyRemoteCerts
	I0122 11:23:51.548094    6184 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0122 11:23:51.548094    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 11:23:53.701592    6184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 11:23:53.701592    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:23:53.701592    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 11:23:56.185478    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 11:23:56.185774    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:23:56.186223    6184 sshutil.go:53] new ssh client: &{IP:172.23.182.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-969800\id_rsa Username:docker}
	I0122 11:23:56.297102    6184 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7489862s)
	I0122 11:23:56.297740    6184 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0122 11:23:56.335241    6184 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0122 11:23:56.374000    6184 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0122 11:23:56.413292    6184 provision.go:86] duration metric: configureAuth took 14.3111796s
	I0122 11:23:56.413292    6184 buildroot.go:189] setting minikube options for container-runtime
	I0122 11:23:56.413292    6184 config.go:182] Loaded profile config "functional-969800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 11:23:56.413869    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 11:23:58.538141    6184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 11:23:58.538141    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:23:58.538141    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 11:24:01.056353    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 11:24:01.056452    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:24:01.061778    6184 main.go:141] libmachine: Using SSH client type: native
	I0122 11:24:01.062464    6184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 11:24:01.062464    6184 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0122 11:24:01.204273    6184 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0122 11:24:01.204273    6184 buildroot.go:70] root file system type: tmpfs
	I0122 11:24:01.204617    6184 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0122 11:24:01.204668    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 11:24:03.346406    6184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 11:24:03.346406    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:24:03.346406    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 11:24:05.905128    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 11:24:05.905128    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:24:05.911339    6184 main.go:141] libmachine: Using SSH client type: native
	I0122 11:24:05.912244    6184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 11:24:05.912244    6184 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0122 11:24:06.073724    6184 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0122 11:24:06.073724    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 11:24:08.229437    6184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 11:24:08.229755    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:24:08.229755    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 11:24:10.819459    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 11:24:10.819459    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:24:10.825018    6184 main.go:141] libmachine: Using SSH client type: native
	I0122 11:24:10.825726    6184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 11:24:10.825726    6184 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0122 11:24:10.978172    6184 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 11:24:10.978172    6184 machine.go:91] provisioned docker machine in 38.4947703s
	I0122 11:24:10.978172    6184 start.go:300] post-start starting for "functional-969800" (driver="hyperv")
	I0122 11:24:10.978172    6184 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0122 11:24:10.991666    6184 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0122 11:24:10.991666    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 11:24:13.158955    6184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 11:24:13.159108    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:24:13.159108    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 11:24:15.702276    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 11:24:15.702276    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:24:15.702816    6184 sshutil.go:53] new ssh client: &{IP:172.23.182.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-969800\id_rsa Username:docker}
	I0122 11:24:15.809972    6184 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.818285s)
	I0122 11:24:15.825715    6184 ssh_runner.go:195] Run: cat /etc/os-release
	I0122 11:24:15.831293    6184 info.go:137] Remote host: Buildroot 2021.02.12
	I0122 11:24:15.831293    6184 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0122 11:24:15.831873    6184 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0122 11:24:15.833289    6184 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> 63962.pem in /etc/ssl/certs
	I0122 11:24:15.834521    6184 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\6396\hosts -> hosts in /etc/test/nested/copy/6396
	I0122 11:24:15.845569    6184 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/6396
	I0122 11:24:15.860176    6184 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem --> /etc/ssl/certs/63962.pem (1708 bytes)
	I0122 11:24:15.898930    6184 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\6396\hosts --> /etc/test/nested/copy/6396/hosts (40 bytes)
	I0122 11:24:15.935785    6184 start.go:303] post-start completed in 4.9575914s
	I0122 11:24:15.935912    6184 fix.go:56] fixHost completed within 46.1604801s
	I0122 11:24:15.935989    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 11:24:18.084871    6184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 11:24:18.085048    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:24:18.085048    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 11:24:20.652032    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 11:24:20.652032    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:24:20.657685    6184 main.go:141] libmachine: Using SSH client type: native
	I0122 11:24:20.658373    6184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 11:24:20.658373    6184 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0122 11:24:20.786672    6184 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705922660.787395205
	
	I0122 11:24:20.786672    6184 fix.go:206] guest clock: 1705922660.787395205
	I0122 11:24:20.786672    6184 fix.go:219] Guest: 2024-01-22 11:24:20.787395205 +0000 UTC Remote: 2024-01-22 11:24:15.9359125 +0000 UTC m=+51.799138601 (delta=4.851482705s)
	I0122 11:24:20.786785    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 11:24:22.947432    6184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 11:24:22.947432    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:24:22.947617    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 11:24:25.472703    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 11:24:25.472703    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:24:25.478467    6184 main.go:141] libmachine: Using SSH client type: native
	I0122 11:24:25.479226    6184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
	I0122 11:24:25.479226    6184 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1705922660
	I0122 11:24:25.631726    6184 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan 22 11:24:20 UTC 2024
	
	I0122 11:24:25.631726    6184 fix.go:226] clock set: Mon Jan 22 11:24:20 UTC 2024
	 (err=<nil>)
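	[editor's note] The clock fix above reads the guest's epoch over SSH, computes the delta against the host-side timestamp, and resets the guest clock with `date -s @<epoch>`. A hedged sketch of the arithmetic, with an illustrative host value (the real run saw a delta of ~4.85s; the `date -s` call is echoed rather than executed, since setting the clock needs root):

	```shell
	# Guest epoch taken from the log above; host value is illustrative.
	guest=1705922660
	host=1705922655           # assumed: 5 seconds behind the guest reading
	delta=$((guest - host))
	echo "delta=${delta}s"
	echo "would run: sudo date -s @${guest}"
	```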
	I0122 11:24:25.631726    6184 start.go:83] releasing machines lock for "functional-969800", held for 55.8572498s
	I0122 11:24:25.632332    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 11:24:27.715614    6184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 11:24:27.715614    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:24:27.715955    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 11:24:30.263943    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 11:24:30.264303    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:24:30.269166    6184 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0122 11:24:30.269244    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 11:24:30.283449    6184 ssh_runner.go:195] Run: cat /version.json
	I0122 11:24:30.283449    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
	I0122 11:24:32.445469    6184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 11:24:32.445469    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:24:32.445469    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 11:24:32.476203    6184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 11:24:32.476319    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:24:32.476319    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
	I0122 11:24:35.056514    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 11:24:35.056514    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:24:35.056514    6184 sshutil.go:53] new ssh client: &{IP:172.23.182.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-969800\id_rsa Username:docker}
	I0122 11:24:35.103081    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59
	
	I0122 11:24:35.103081    6184 main.go:141] libmachine: [stderr =====>] : 
	I0122 11:24:35.103551    6184 sshutil.go:53] new ssh client: &{IP:172.23.182.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-969800\id_rsa Username:docker}
	I0122 11:24:45.230724    6184 ssh_runner.go:235] Completed: cat /version.json: (14.947208s)
	I0122 11:24:45.230948    6184 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (14.9616545s)
	W0122 11:24:45.230948    6184 start.go:843] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28
	stdout:
	
	stderr:
	curl: (28) Resolving timed out after 2000 milliseconds
	W0122 11:24:45.230948    6184 out.go:239] ! This VM is having trouble accessing https://registry.k8s.io
	W0122 11:24:45.230948    6184 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0122 11:24:45.247121    6184 ssh_runner.go:195] Run: systemctl --version
	I0122 11:24:45.268459    6184 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0122 11:24:45.276863    6184 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0122 11:24:45.289111    6184 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0122 11:24:45.302602    6184 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0122 11:24:45.302602    6184 start.go:475] detecting cgroup driver to use...
	I0122 11:24:45.302987    6184 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 11:24:45.344679    6184 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0122 11:24:45.376694    6184 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0122 11:24:45.391228    6184 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0122 11:24:45.403401    6184 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0122 11:24:45.434227    6184 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 11:24:45.467889    6184 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0122 11:24:45.495529    6184 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 11:24:45.526566    6184 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0122 11:24:45.555545    6184 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0122 11:24:45.586280    6184 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0122 11:24:45.619757    6184 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0122 11:24:45.651490    6184 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 11:24:45.826436    6184 ssh_runner.go:195] Run: sudo systemctl restart containerd
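	[editor's note] The run above rewrites /etc/containerd/config.toml with a series of `sed -i -r` substitutions to force the "cgroupfs" cgroup driver. A sketch of the key substitution against a sample config fragment (the fragment is an illustrative assumption, not minikube's full config.toml; the sed pattern is the one recorded in the log):

	```shell
	# Flip SystemdCgroup to false in a sample containerd config fragment.
	cfg=$(mktemp)
	printf '  SystemdCgroup = true\n' > "$cfg"
	sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
	cat "$cfg"                # leading indentation is preserved by \1
	```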
	I0122 11:24:45.853776    6184 start.go:475] detecting cgroup driver to use...
	I0122 11:24:45.866418    6184 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0122 11:24:45.900041    6184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 11:24:45.933396    6184 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0122 11:24:45.974943    6184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 11:24:46.005340    6184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0122 11:24:46.026136    6184 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 11:24:46.068325    6184 ssh_runner.go:195] Run: which cri-dockerd
	I0122 11:24:46.086267    6184 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0122 11:24:46.099080    6184 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0122 11:24:46.138821    6184 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0122 11:24:46.316107    6184 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0122 11:24:46.478342    6184 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0122 11:24:46.478342    6184 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0122 11:24:46.517039    6184 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 11:24:46.692069    6184 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0122 11:26:11.896948    6184 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m25.2043524s)
	I0122 11:26:11.910605    6184 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0122 11:26:11.977390    6184 out.go:177] 
	W0122 11:26:11.978052    6184 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Mon 2024-01-22 10:54:50 UTC, ends at Mon 2024-01-22 11:26:11 UTC. --
	Jan 22 10:55:41 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.640172956Z" level=info msg="Starting up"
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.641196772Z" level=info msg="containerd not running, starting managed containerd"
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.643170703Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=696
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.681307999Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.711852976Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.711950178Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.714822923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.714949725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715319930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715475833Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715577834Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715725437Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715815938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715939840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716415947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716515049Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716533749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716692852Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716783253Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716856454Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716943756Z" level=info msg="metadata content store policy set" policy=shared
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731298280Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731388081Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731408582Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731467083Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731484883Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731498383Z" level=info msg="NRI interface is disabled by configuration."
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731528784Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731677686Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731783888Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731806788Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731822488Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731838588Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731859089Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731890989Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731924790Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731943290Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731959290Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731973491Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731992491Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732288795Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732814904Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732879305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732900805Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732940506Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733012907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733046407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733235810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733295711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733311411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733344312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733358612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733373012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733387713Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733473314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733594716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733615016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733643517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733664217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733681917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733711618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733724518Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733749418Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733762418Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733774019Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734197325Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734365128Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734471630Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734496030Z" level=info msg="containerd successfully booted in 0.054269s"
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.768676964Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.783849401Z" level=info msg="Loading containers: start."
	Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.997712742Z" level=info msg="Loading containers: done."
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022214804Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022261705Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022274505Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022281005Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022301806Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022416907Z" level=info msg="Daemon has completed initialization"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.083652704Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.083926108Z" level=info msg="API listen on [::]:2376"
	Jan 22 10:55:42 functional-969800 systemd[1]: Started Docker Application Container Engine.
	Jan 22 10:56:12 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.621957907Z" level=info msg="Processing signal 'terminated'"
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623204307Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623373807Z" level=info msg="Daemon shutdown complete"
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623417807Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623429307Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 22 10:56:13 functional-969800 systemd[1]: docker.service: Succeeded.
	Jan 22 10:56:13 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 10:56:13 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.697995107Z" level=info msg="Starting up"
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.699240007Z" level=info msg="containerd not running, starting managed containerd"
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.700630907Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1031
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.732892407Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.759794007Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.759947407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.762945807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763156107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763474707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763800907Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764170707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764208607Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764225307Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764251107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764439107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764536207Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764552707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764742807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764836407Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764867007Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764897007Z" level=info msg="metadata content store policy set" policy=shared
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765451307Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765545607Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765570407Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765599407Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765631407Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765670507Z" level=info msg="NRI interface is disabled by configuration."
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765715707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765770507Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765918107Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765954407Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765992607Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766011607Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766135807Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766161807Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766178607Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766211907Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766239507Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766258407Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766273107Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766317107Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767449407Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767559907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767585207Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767614707Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767679407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767698307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767713707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767728107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767743307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767759107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767773607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767790107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767807907Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767846207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767884107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767901707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767917307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767934707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767953007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767985707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768000707Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768018307Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768110907Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768150607Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770293407Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770589207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770747207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770779907Z" level=info msg="containerd successfully booted in 0.039287s"
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.796437607Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.812301607Z" level=info msg="Loading containers: start."
	Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.959771407Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.025356107Z" level=info msg="Loading containers: done."
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043938307Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043967507Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043976207Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043982907Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.044002007Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.044081507Z" level=info msg="Daemon has completed initialization"
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.084793407Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 22 10:56:14 functional-969800 systemd[1]: Started Docker Application Container Engine.
	Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.087289507Z" level=info msg="API listen on [::]:2376"
	Jan 22 10:56:24 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.888530207Z" level=info msg="Processing signal 'terminated'"
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890215407Z" level=info msg="Daemon shutdown complete"
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890260307Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890587307Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890740407Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 22 10:56:25 functional-969800 systemd[1]: docker.service: Succeeded.
	Jan 22 10:56:25 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 10:56:25 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.965408907Z" level=info msg="Starting up"
	Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.966518207Z" level=info msg="containerd not running, starting managed containerd"
	Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.967588307Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1337
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.001795407Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.035509507Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.035631707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038165807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038334707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038628507Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038744407Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038816807Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038858807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038871007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038894707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039248607Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039359407Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039378107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039547707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039641707Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039668207Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039685807Z" level=info msg="metadata content store policy set" policy=shared
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039841507Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039933007Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040079807Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040230207Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040269807Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040281707Z" level=info msg="NRI interface is disabled by configuration."
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040294707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040361507Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040377307Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040391107Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040409607Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040429307Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040449707Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040463607Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040482307Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040514007Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040527507Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040540307Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040551607Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040608107Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042558207Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042674107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042695907Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042734707Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042787207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042916307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042937207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042958307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042974407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042993607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043009507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043463207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043558107Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043662307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043706107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043727707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043761007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043778507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043801907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043893407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043920207Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043942907Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043958907Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043974107Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044581807Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044705407Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044803207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044899507Z" level=info msg="containerd successfully booted in 0.045219s"
	Jan 22 10:56:26 functional-969800 dockerd[1331]: time="2024-01-22T10:56:26.371254407Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.294692607Z" level=info msg="Loading containers: start."
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.459462607Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.525291207Z" level=info msg="Loading containers: done."
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544516307Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544552907Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544562107Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544572107Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544597307Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544708807Z" level=info msg="Daemon has completed initialization"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.576971407Z" level=info msg="API listen on [::]:2376"
	Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.577092907Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 22 10:56:30 functional-969800 systemd[1]: Started Docker Application Container Engine.
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643012417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643161510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643191508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643608487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648436541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648545436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648613232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648628031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650622030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650800621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650820520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650831019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.675561460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.675860444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.676066234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.676182428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627214283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627479370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627708859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627925149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.661590442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.665215768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.665413259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.667762047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.837739331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838092414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838119012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838132512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029726252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029885345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029907544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029921943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442471250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442551949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442585148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442603448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.636810797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637124693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637309591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637408590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.128989239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129139537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129211936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129226336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.005359252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.018527500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.020526276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.020768074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531424444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531481843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531496243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531507943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.163527770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.164982859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.165023859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.165340756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 10:58:54 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.271367222Z" level=info msg="Processing signal 'terminated'"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.408491526Z" level=info msg="ignoring event" container=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.408935427Z" level=info msg="shim disconnected" id=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.409367629Z" level=warning msg="cleaning up after shim disconnected" id=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.409386829Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482131396Z" level=info msg="shim disconnected" id=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482242197Z" level=warning msg="cleaning up after shim disconnected" id=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482643498Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.485265908Z" level=info msg="ignoring event" container=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.514982317Z" level=info msg="ignoring event" container=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.517274025Z" level=info msg="shim disconnected" id=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.518024528Z" level=warning msg="cleaning up after shim disconnected" id=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.518088128Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.533465185Z" level=info msg="ignoring event" container=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533775786Z" level=info msg="shim disconnected" id=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533878986Z" level=warning msg="cleaning up after shim disconnected" id=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533896387Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.541816616Z" level=info msg="ignoring event" container=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.544778327Z" level=info msg="shim disconnected" id=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.545052128Z" level=warning msg="cleaning up after shim disconnected" id=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.545071928Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.564756500Z" level=info msg="ignoring event" container=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.566095405Z" level=info msg="shim disconnected" id=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.580413557Z" level=warning msg="cleaning up after shim disconnected" id=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.580441258Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590163393Z" level=info msg="shim disconnected" id=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590197393Z" level=warning msg="cleaning up after shim disconnected" id=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590265894Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.600700032Z" level=info msg="shim disconnected" id=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.600935033Z" level=info msg="ignoring event" container=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.600978833Z" level=info msg="ignoring event" container=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.611137470Z" level=info msg="ignoring event" container=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611370671Z" level=warning msg="cleaning up after shim disconnected" id=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611429671Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611635772Z" level=info msg="shim disconnected" id=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.612703976Z" level=warning msg="cleaning up after shim disconnected" id=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.613303678Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.613702780Z" level=info msg="ignoring event" container=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.616524590Z" level=info msg="ignoring event" container=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.618830499Z" level=info msg="shim disconnected" id=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.618976199Z" level=warning msg="cleaning up after shim disconnected" id=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.627451230Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.628495534Z" level=info msg="shim disconnected" id=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.630007140Z" level=warning msg="cleaning up after shim disconnected" id=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.630196840Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.648394307Z" level=info msg="shim disconnected" id=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.650553715Z" level=info msg="ignoring event" container=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.658301844Z" level=warning msg="cleaning up after shim disconnected" id=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c namespace=moby
	Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.658345944Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:58:59 functional-969800 dockerd[1331]: time="2024-01-22T10:58:59.444520533Z" level=info msg="ignoring event" container=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.447631244Z" level=info msg="shim disconnected" id=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 namespace=moby
	Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.448443247Z" level=warning msg="cleaning up after shim disconnected" id=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 namespace=moby
	Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.448523248Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.436253177Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.489047672Z" level=info msg="ignoring event" container=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.490548477Z" level=info msg="shim disconnected" id=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.490993079Z" level=warning msg="cleaning up after shim disconnected" id=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.491010379Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.536562746Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.536997048Z" level=info msg="Daemon shutdown complete"
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.537118848Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.537155648Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 22 10:59:05 functional-969800 systemd[1]: docker.service: Succeeded.
	Jan 22 10:59:05 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 10:59:05 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 10:59:05 functional-969800 dockerd[5681]: time="2024-01-22T10:59:05.618184021Z" level=info msg="Starting up"
	Jan 22 11:00:05 functional-969800 dockerd[5681]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:00:05 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:00:05 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:00:05 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:00:05 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Jan 22 11:00:05 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:00:05 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:00:05 functional-969800 dockerd[5820]: time="2024-01-22T11:00:05.817795905Z" level=info msg="Starting up"
	Jan 22 11:01:05 functional-969800 dockerd[5820]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:01:05 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:01:05 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:01:05 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:01:05 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Jan 22 11:01:05 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:01:05 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:01:06 functional-969800 dockerd[6034]: time="2024-01-22T11:01:06.042578101Z" level=info msg="Starting up"
	Jan 22 11:02:06 functional-969800 dockerd[6034]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:02:06 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:02:06 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:02:06 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:02:06 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
	Jan 22 11:02:06 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:02:06 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:02:06 functional-969800 dockerd[6195]: time="2024-01-22T11:02:06.252465592Z" level=info msg="Starting up"
	Jan 22 11:03:06 functional-969800 dockerd[6195]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:03:06 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:03:06 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:03:06 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:03:06 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 4.
	Jan 22 11:03:06 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:03:06 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:03:06 functional-969800 dockerd[6342]: time="2024-01-22T11:03:06.467155089Z" level=info msg="Starting up"
	Jan 22 11:04:06 functional-969800 dockerd[6342]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:04:06 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:04:06 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:04:06 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:04:06 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 5.
	Jan 22 11:04:06 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:04:06 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:04:06 functional-969800 dockerd[6610]: time="2024-01-22T11:04:06.803106897Z" level=info msg="Starting up"
	Jan 22 11:05:06 functional-969800 dockerd[6610]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:05:06 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:05:06 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:05:06 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:05:06 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 6.
	Jan 22 11:05:06 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:05:06 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:05:07 functional-969800 dockerd[6768]: time="2024-01-22T11:05:07.038180841Z" level=info msg="Starting up"
	Jan 22 11:06:07 functional-969800 dockerd[6768]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:06:07 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:06:07 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:06:07 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:06:07 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 7.
	Jan 22 11:06:07 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:06:07 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:06:07 functional-969800 dockerd[6963]: time="2024-01-22T11:06:07.289156413Z" level=info msg="Starting up"
	Jan 22 11:07:07 functional-969800 dockerd[6963]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:07:07 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:07:07 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:07:07 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:07:07 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 8.
	Jan 22 11:07:07 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:07:07 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:07:07 functional-969800 dockerd[7116]: time="2024-01-22T11:07:07.566278979Z" level=info msg="Starting up"
	Jan 22 11:08:07 functional-969800 dockerd[7116]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:08:07 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:08:07 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:08:07 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:08:07 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 9.
	Jan 22 11:08:07 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:08:07 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:08:07 functional-969800 dockerd[7273]: time="2024-01-22T11:08:07.800042513Z" level=info msg="Starting up"
	Jan 22 11:09:07 functional-969800 dockerd[7273]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:09:07 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:09:07 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:09:07 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:09:07 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 10.
	Jan 22 11:09:07 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:09:07 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:09:08 functional-969800 dockerd[7420]: time="2024-01-22T11:09:08.055663358Z" level=info msg="Starting up"
	Jan 22 11:10:08 functional-969800 dockerd[7420]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:10:08 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:10:08 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:10:08 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:10:08 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 11.
	Jan 22 11:10:08 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:10:08 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:10:08 functional-969800 dockerd[7582]: time="2024-01-22T11:10:08.298276757Z" level=info msg="Starting up"
	Jan 22 11:11:08 functional-969800 dockerd[7582]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:11:08 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:11:08 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:11:08 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:11:08 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 12.
	Jan 22 11:11:08 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:11:08 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:11:08 functional-969800 dockerd[7730]: time="2024-01-22T11:11:08.568515683Z" level=info msg="Starting up"
	Jan 22 11:12:08 functional-969800 dockerd[7730]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:12:08 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:12:08 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:12:08 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:12:08 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 13.
	Jan 22 11:12:08 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:12:08 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:12:08 functional-969800 dockerd[7901]: time="2024-01-22T11:12:08.793897827Z" level=info msg="Starting up"
	Jan 22 11:13:08 functional-969800 dockerd[7901]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:13:08 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:13:08 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:13:08 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:13:08 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 14.
	Jan 22 11:13:08 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:13:08 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:13:09 functional-969800 dockerd[8084]: time="2024-01-22T11:13:09.049002906Z" level=info msg="Starting up"
	Jan 22 11:14:09 functional-969800 dockerd[8084]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:14:09 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:14:09 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:14:09 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:14:09 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 15.
	Jan 22 11:14:09 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:14:09 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:14:09 functional-969800 dockerd[8262]: time="2024-01-22T11:14:09.310111876Z" level=info msg="Starting up"
	Jan 22 11:15:09 functional-969800 dockerd[8262]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:15:09 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:15:09 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:15:09 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:15:09 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 16.
	Jan 22 11:15:09 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:15:09 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:15:09 functional-969800 dockerd[8424]: time="2024-01-22T11:15:09.550451227Z" level=info msg="Starting up"
	Jan 22 11:16:09 functional-969800 dockerd[8424]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:16:09 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:16:09 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:16:09 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:16:09 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 17.
	Jan 22 11:16:09 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:16:09 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:16:09 functional-969800 dockerd[8663]: time="2024-01-22T11:16:09.793536981Z" level=info msg="Starting up"
	Jan 22 11:17:09 functional-969800 dockerd[8663]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:17:09 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:17:09 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:17:09 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:17:09 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 18.
	Jan 22 11:17:09 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:17:09 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:17:09 functional-969800 dockerd[8816]: time="2024-01-22T11:17:09.993681624Z" level=info msg="Starting up"
	Jan 22 11:18:10 functional-969800 dockerd[8816]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:18:10 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:18:10 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:18:10 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:18:10 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 19.
	Jan 22 11:18:10 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:18:10 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:18:10 functional-969800 dockerd[8970]: time="2024-01-22T11:18:10.219717628Z" level=info msg="Starting up"
	Jan 22 11:19:10 functional-969800 dockerd[8970]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:19:10 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:19:10 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:19:10 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:19:10 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 20.
	Jan 22 11:19:10 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:19:10 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:19:10 functional-969800 dockerd[9236]: time="2024-01-22T11:19:10.567700970Z" level=info msg="Starting up"
	Jan 22 11:20:10 functional-969800 dockerd[9236]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:20:10 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:20:10 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:20:10 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:20:10 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 21.
	Jan 22 11:20:10 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:20:10 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:20:10 functional-969800 dockerd[9393]: time="2024-01-22T11:20:10.785376681Z" level=info msg="Starting up"
	Jan 22 11:21:10 functional-969800 dockerd[9393]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:21:10 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:21:10 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:21:10 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:21:10 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 22.
	Jan 22 11:21:10 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:21:10 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:21:11 functional-969800 dockerd[9646]: time="2024-01-22T11:21:11.049728847Z" level=info msg="Starting up"
	Jan 22 11:22:11 functional-969800 dockerd[9646]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:22:11 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:22:11 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:22:11 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:22:11 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 23.
	Jan 22 11:22:11 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:22:11 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:22:11 functional-969800 dockerd[9809]: time="2024-01-22T11:22:11.276508964Z" level=info msg="Starting up"
	Jan 22 11:23:11 functional-969800 dockerd[9809]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:23:11 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:23:11 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:23:11 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:23:11 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 24.
	Jan 22 11:23:11 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:23:11 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:23:11 functional-969800 dockerd[9961]: time="2024-01-22T11:23:11.489383259Z" level=info msg="Starting up"
	Jan 22 11:24:11 functional-969800 dockerd[9961]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:24:11 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:24:11 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:24:11 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:24:11 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 25.
	Jan 22 11:24:11 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:24:11 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:24:11 functional-969800 dockerd[10291]: time="2024-01-22T11:24:11.800722506Z" level=info msg="Starting up"
	Jan 22 11:24:46 functional-969800 dockerd[10291]: time="2024-01-22T11:24:46.714262322Z" level=info msg="Processing signal 'terminated'"
	Jan 22 11:25:11 functional-969800 dockerd[10291]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:25:11 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:25:11 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:25:11 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:25:11 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:25:11 functional-969800 dockerd[10628]: time="2024-01-22T11:25:11.884976767Z" level=info msg="Starting up"
	Jan 22 11:26:11 functional-969800 dockerd[10628]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:26:11 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:26:11 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:26:11 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0122 11:26:11.979558    6184 out.go:239] * 
	W0122 11:26:11.981138    6184 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0122 11:26:11.982705    6184 out.go:177] 
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-01-22 10:54:50 UTC, ends at Mon 2024-01-22 11:29:12 UTC. --
	Jan 22 11:27:12 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:27:12 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:27:12 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:27:12 functional-969800 cri-dockerd[1222]: time="2024-01-22T11:27:12Z" level=error msg="Unable to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:27:12 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Jan 22 11:27:12 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:27:12 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:27:12 functional-969800 dockerd[10994]: time="2024-01-22T11:27:12.476439297Z" level=info msg="Starting up"
	Jan 22 11:28:12 functional-969800 dockerd[10994]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:28:12 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:28:12 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:28:12 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:28:12 functional-969800 cri-dockerd[1222]: time="2024-01-22T11:28:12Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Jan 22 11:28:12 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
	Jan 22 11:28:12 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:28:12 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 11:28:12 functional-969800 dockerd[11150]: time="2024-01-22T11:28:12.693529119Z" level=info msg="Starting up"
	Jan 22 11:29:12 functional-969800 dockerd[11150]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 11:29:12 functional-969800 cri-dockerd[1222]: time="2024-01-22T11:29:12Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Jan 22 11:29:12 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 11:29:12 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 11:29:12 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 22 11:29:12 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 4.
	Jan 22 11:29:12 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 11:29:12 functional-969800 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	
	==> describe nodes <==
	
	==> dmesg <==
	[Jan22 10:55] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.139168] systemd-fstab-generator[682]: Ignoring "noauto" for root device
	[Jan22 10:56] systemd-fstab-generator[954]: Ignoring "noauto" for root device
	[  +0.548146] systemd-fstab-generator[992]: Ignoring "noauto" for root device
	[  +0.177021] systemd-fstab-generator[1003]: Ignoring "noauto" for root device
	[  +0.192499] systemd-fstab-generator[1016]: Ignoring "noauto" for root device
	[  +1.326236] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.405236] systemd-fstab-generator[1177]: Ignoring "noauto" for root device
	[  +0.162074] systemd-fstab-generator[1188]: Ignoring "noauto" for root device
	[  +0.187859] systemd-fstab-generator[1199]: Ignoring "noauto" for root device
	[  +0.249478] systemd-fstab-generator[1214]: Ignoring "noauto" for root device
	[  +9.926271] systemd-fstab-generator[1322]: Ignoring "noauto" for root device
	[  +5.555073] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.934437] systemd-fstab-generator[1705]: Ignoring "noauto" for root device
	[  +0.641554] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.684973] systemd-fstab-generator[2663]: Ignoring "noauto" for root device
	[Jan22 10:57] kauditd_printk_skb: 21 callbacks suppressed
	[Jan22 10:58] systemd-fstab-generator[5187]: Ignoring "noauto" for root device
	[  +0.564007] systemd-fstab-generator[5225]: Ignoring "noauto" for root device
	[  +0.215913] systemd-fstab-generator[5236]: Ignoring "noauto" for root device
	[  +0.280562] systemd-fstab-generator[5249]: Ignoring "noauto" for root device
	[Jan22 11:24] systemd-fstab-generator[10504]: Ignoring "noauto" for root device
	[  +0.487513] systemd-fstab-generator[10540]: Ignoring "noauto" for root device
	[  +0.178599] systemd-fstab-generator[10555]: Ignoring "noauto" for root device
	[  +0.195768] systemd-fstab-generator[10569]: Ignoring "noauto" for root device
	
	
	==> kernel <==
	 11:30:13 up 35 min,  0 users,  load average: 0.04, 0.04, 0.02
	Linux functional-969800 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-22 10:54:50 UTC, ends at Mon 2024-01-22 11:30:13 UTC. --
	Jan 22 11:30:05 functional-969800 kubelet[2697]: E0122 11:30:05.560312    2697 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-969800?timeout=10s\": dial tcp 172.23.182.59:8441: connect: connection refused" interval="7s"
	Jan 22 11:30:08 functional-969800 kubelet[2697]: E0122 11:30:08.888412    2697 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 31m14.970739558s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Jan 22 11:30:10 functional-969800 kubelet[2697]: E0122 11:30:10.892523    2697 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-969800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-969800?resourceVersion=0&timeout=10s\": dial tcp 172.23.182.59:8441: connect: connection refused"
	Jan 22 11:30:10 functional-969800 kubelet[2697]: E0122 11:30:10.893038    2697 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-969800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-969800?timeout=10s\": dial tcp 172.23.182.59:8441: connect: connection refused"
	Jan 22 11:30:10 functional-969800 kubelet[2697]: E0122 11:30:10.893289    2697 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-969800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-969800?timeout=10s\": dial tcp 172.23.182.59:8441: connect: connection refused"
	Jan 22 11:30:10 functional-969800 kubelet[2697]: E0122 11:30:10.893461    2697 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-969800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-969800?timeout=10s\": dial tcp 172.23.182.59:8441: connect: connection refused"
	Jan 22 11:30:10 functional-969800 kubelet[2697]: E0122 11:30:10.893836    2697 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-969800\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-969800?timeout=10s\": dial tcp 172.23.182.59:8441: connect: connection refused"
	Jan 22 11:30:10 functional-969800 kubelet[2697]: E0122 11:30:10.893944    2697 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
	Jan 22 11:30:12 functional-969800 kubelet[2697]: E0122 11:30:12.562319    2697 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-969800?timeout=10s\": dial tcp 172.23.182.59:8441: connect: connection refused" interval="7s"
	Jan 22 11:30:13 functional-969800 kubelet[2697]: E0122 11:30:13.035544    2697 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jan 22 11:30:13 functional-969800 kubelet[2697]: E0122 11:30:13.035802    2697 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:30:13 functional-969800 kubelet[2697]: E0122 11:30:13.036262    2697 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:30:13 functional-969800 kubelet[2697]: E0122 11:30:13.041487    2697 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jan 22 11:30:13 functional-969800 kubelet[2697]: E0122 11:30:13.041519    2697 kuberuntime_image.go:103] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:30:13 functional-969800 kubelet[2697]: I0122 11:30:13.041534    2697 image_gc_manager.go:210] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:30:13 functional-969800 kubelet[2697]: E0122 11:30:13.041782    2697 kubelet.go:2865] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:30:13 functional-969800 kubelet[2697]: E0122 11:30:13.042003    2697 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:30:13 functional-969800 kubelet[2697]: E0122 11:30:13.042137    2697 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:30:13 functional-969800 kubelet[2697]: E0122 11:30:13.043180    2697 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jan 22 11:30:13 functional-969800 kubelet[2697]: E0122 11:30:13.043467    2697 kuberuntime_container.go:477] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:30:13 functional-969800 kubelet[2697]: E0122 11:30:13.044073    2697 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jan 22 11:30:13 functional-969800 kubelet[2697]: E0122 11:30:13.044241    2697 container_log_manager.go:185] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 22 11:30:13 functional-969800 kubelet[2697]: E0122 11:30:13.046138    2697 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jan 22 11:30:13 functional-969800 kubelet[2697]: E0122 11:30:13.046250    2697 kuberuntime_container.go:477] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jan 22 11:30:13 functional-969800 kubelet[2697]: E0122 11:30:13.046482    2697 kubelet.go:1402] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0122 11:28:47.554368   13604 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0122 11:29:12.707674   13604 logs.go:281] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:29:12.741504   13604 logs.go:281] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:29:12.773537   13604 logs.go:281] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:29:12.804733   13604 logs.go:281] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:29:12.836943   13604 logs.go:281] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:29:12.870239   13604 logs.go:281] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:29:12.904366   13604 logs.go:281] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:29:12.936129   13604 logs.go:281] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:30:13.039793   13604 logs.go:195] command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-01-22T11:29:15Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	 output: "\n** stderr ** \ntime=\"2024-01-22T11:29:15Z\" level=fatal msg=\"validate service connection: validate CRI v1 runtime API for endpoint \\\"unix:///var/run/cri-dockerd.sock\\\": rpc error: code = DeadlineExceeded desc = context deadline exceeded\"\nCannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?\n\n** /stderr **"
	E0122 11:30:13.142090   13604 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8441 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: container status, describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-969800 -n functional-969800
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-969800 -n functional-969800: exit status 2 (12.0770872s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0122 11:30:13.723166    8808 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-969800" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (120.64s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (93.18s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-969800 logs --file C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1263400204\001\logs.txt
E0122 11:33:23.959327    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
functional_test.go:1246: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-969800 logs --file C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1263400204\001\logs.txt: exit status 1 (1m32.9114299s)

                                                
                                                
** stderr ** 
	W0122 11:32:14.072333   14284 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0122 11:33:13.799796   14284 logs.go:281] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:33:13.834864   14284 logs.go:281] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:33:13.867449   14284 logs.go:281] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:33:13.902468   14284 logs.go:281] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:33:13.940252   14284 logs.go:281] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0122 11:33:13.973989   14284 logs.go:281] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
functional_test.go:1248: out/minikube-windows-amd64.exe -p functional-969800 logs --file C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1263400204\001\logs.txt failed: exit status 1
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
| Command |                                            Args                                             |       Profile        |       User        | Version |     Start Time      |      End Time       |
|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
| delete  | -p download-only-792600                                                                     | download-only-792600 | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:36 UTC | 22 Jan 24 10:36 UTC |
| start   | --download-only -p                                                                          | binary-mirror-350600 | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:36 UTC |                     |
|         | binary-mirror-350600                                                                        |                      |                   |         |                     |                     |
|         | --alsologtostderr                                                                           |                      |                   |         |                     |                     |
|         | --binary-mirror                                                                             |                      |                   |         |                     |                     |
|         | http://127.0.0.1:54978                                                                      |                      |                   |         |                     |                     |
|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
| delete  | -p binary-mirror-350600                                                                     | binary-mirror-350600 | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:36 UTC | 22 Jan 24 10:36 UTC |
| addons  | enable dashboard -p                                                                         | addons-760600        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:36 UTC |                     |
|         | addons-760600                                                                               |                      |                   |         |                     |                     |
| addons  | disable dashboard -p                                                                        | addons-760600        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:36 UTC |                     |
|         | addons-760600                                                                               |                      |                   |         |                     |                     |
| start   | -p addons-760600 --wait=true                                                                | addons-760600        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:36 UTC | 22 Jan 24 10:43 UTC |
|         | --memory=4000 --alsologtostderr                                                             |                      |                   |         |                     |                     |
|         | --addons=registry                                                                           |                      |                   |         |                     |                     |
|         | --addons=metrics-server                                                                     |                      |                   |         |                     |                     |
|         | --addons=volumesnapshots                                                                    |                      |                   |         |                     |                     |
|         | --addons=csi-hostpath-driver                                                                |                      |                   |         |                     |                     |
|         | --addons=gcp-auth                                                                           |                      |                   |         |                     |                     |
|         | --addons=cloud-spanner                                                                      |                      |                   |         |                     |                     |
|         | --addons=inspektor-gadget                                                                   |                      |                   |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                                        |                      |                   |         |                     |                     |
|         | --addons=nvidia-device-plugin                                                               |                      |                   |         |                     |                     |
|         | --addons=yakd --driver=hyperv                                                               |                      |                   |         |                     |                     |
|         | --addons=ingress                                                                            |                      |                   |         |                     |                     |
|         | --addons=ingress-dns                                                                        |                      |                   |         |                     |                     |
|         | --addons=helm-tiller                                                                        |                      |                   |         |                     |                     |
| addons  | addons-760600 addons                                                                        | addons-760600        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:43 UTC | 22 Jan 24 10:43 UTC |
|         | disable metrics-server                                                                      |                      |                   |         |                     |                     |
|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
| ssh     | addons-760600 ssh cat                                                                       | addons-760600        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:43 UTC | 22 Jan 24 10:43 UTC |
|         | /opt/local-path-provisioner/pvc-957c413a-deb5-4049-8cba-7fb764975890_default_test-pvc/file1 |                      |                   |         |                     |                     |
| ip      | addons-760600 ip                                                                            | addons-760600        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:43 UTC | 22 Jan 24 10:43 UTC |
| addons  | addons-760600 addons disable                                                                | addons-760600        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:43 UTC | 22 Jan 24 10:43 UTC |
|         | registry --alsologtostderr                                                                  |                      |                   |         |                     |                     |
|         | -v=1                                                                                        |                      |                   |         |                     |                     |
| addons  | addons-760600 addons disable                                                                | addons-760600        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:43 UTC | 22 Jan 24 10:43 UTC |
|         | storage-provisioner-rancher                                                                 |                      |                   |         |                     |                     |
|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
| addons  | disable cloud-spanner -p                                                                    | addons-760600        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:43 UTC | 22 Jan 24 10:44 UTC |
|         | addons-760600                                                                               |                      |                   |         |                     |                     |
| addons  | enable headlamp                                                                             | addons-760600        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:43 UTC | 22 Jan 24 10:44 UTC |
|         | -p addons-760600                                                                            |                      |                   |         |                     |                     |
|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
| addons  | disable nvidia-device-plugin                                                                | addons-760600        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:44 UTC | 22 Jan 24 10:44 UTC |
|         | -p addons-760600                                                                            |                      |                   |         |                     |                     |
| addons  | addons-760600 addons                                                                        | addons-760600        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:44 UTC | 22 Jan 24 10:44 UTC |
|         | disable csi-hostpath-driver                                                                 |                      |                   |         |                     |                     |
|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
| addons  | disable inspektor-gadget -p                                                                 | addons-760600        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:44 UTC | 22 Jan 24 10:45 UTC |
|         | addons-760600                                                                               |                      |                   |         |                     |                     |
| ssh     | addons-760600 ssh curl -s                                                                   | addons-760600        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:44 UTC | 22 Jan 24 10:44 UTC |
|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |                   |         |                     |                     |
|         | nginx.example.com'                                                                          |                      |                   |         |                     |                     |
| addons  | addons-760600 addons disable                                                                | addons-760600        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:44 UTC | 22 Jan 24 10:45 UTC |
|         | helm-tiller --alsologtostderr                                                               |                      |                   |         |                     |                     |
|         | -v=1                                                                                        |                      |                   |         |                     |                     |
| addons  | addons-760600 addons                                                                        | addons-760600        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:44 UTC | 22 Jan 24 10:45 UTC |
|         | disable volumesnapshots                                                                     |                      |                   |         |                     |                     |
|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
| ip      | addons-760600 ip                                                                            | addons-760600        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:44 UTC | 22 Jan 24 10:45 UTC |
| addons  | addons-760600 addons disable                                                                | addons-760600        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:45 UTC | 22 Jan 24 10:45 UTC |
|         | ingress-dns --alsologtostderr                                                               |                      |                   |         |                     |                     |
|         | -v=1                                                                                        |                      |                   |         |                     |                     |
| addons  | addons-760600 addons disable                                                                | addons-760600        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:45 UTC | 22 Jan 24 10:45 UTC |
|         | ingress --alsologtostderr -v=1                                                              |                      |                   |         |                     |                     |
| addons  | addons-760600 addons disable                                                                | addons-760600        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:45 UTC | 22 Jan 24 10:46 UTC |
|         | gcp-auth --alsologtostderr                                                                  |                      |                   |         |                     |                     |
|         | -v=1                                                                                        |                      |                   |         |                     |                     |
| stop    | -p addons-760600                                                                            | addons-760600        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:46 UTC | 22 Jan 24 10:46 UTC |
| addons  | enable dashboard -p                                                                         | addons-760600        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:46 UTC | 22 Jan 24 10:47 UTC |
|         | addons-760600                                                                               |                      |                   |         |                     |                     |
| addons  | disable dashboard -p                                                                        | addons-760600        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:47 UTC | 22 Jan 24 10:47 UTC |
|         | addons-760600                                                                               |                      |                   |         |                     |                     |
| addons  | disable gvisor -p                                                                           | addons-760600        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:47 UTC | 22 Jan 24 10:47 UTC |
|         | addons-760600                                                                               |                      |                   |         |                     |                     |
| delete  | -p addons-760600                                                                            | addons-760600        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:47 UTC | 22 Jan 24 10:47 UTC |
| start   | -p nospam-526800 -n=1 --memory=2250 --wait=false                                            | nospam-526800        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:47 UTC | 22 Jan 24 10:50 UTC |
|         | --log_dir=C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800                       |                      |                   |         |                     |                     |
|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
| start   | nospam-526800 --log_dir                                                                     | nospam-526800        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:50 UTC |                     |
|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800                                 |                      |                   |         |                     |                     |
|         | start --dry-run                                                                             |                      |                   |         |                     |                     |
| start   | nospam-526800 --log_dir                                                                     | nospam-526800        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:51 UTC |                     |
|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800                                 |                      |                   |         |                     |                     |
|         | start --dry-run                                                                             |                      |                   |         |                     |                     |
| start   | nospam-526800 --log_dir                                                                     | nospam-526800        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:51 UTC |                     |
|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800                                 |                      |                   |         |                     |                     |
|         | start --dry-run                                                                             |                      |                   |         |                     |                     |
| pause   | nospam-526800 --log_dir                                                                     | nospam-526800        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:51 UTC | 22 Jan 24 10:52 UTC |
|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800                                 |                      |                   |         |                     |                     |
|         | pause                                                                                       |                      |                   |         |                     |                     |
| pause   | nospam-526800 --log_dir                                                                     | nospam-526800        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:52 UTC | 22 Jan 24 10:52 UTC |
|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800                                 |                      |                   |         |                     |                     |
|         | pause                                                                                       |                      |                   |         |                     |                     |
| pause   | nospam-526800 --log_dir                                                                     | nospam-526800        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:52 UTC | 22 Jan 24 10:52 UTC |
|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800                                 |                      |                   |         |                     |                     |
|         | pause                                                                                       |                      |                   |         |                     |                     |
| unpause | nospam-526800 --log_dir                                                                     | nospam-526800        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:52 UTC | 22 Jan 24 10:52 UTC |
|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800                                 |                      |                   |         |                     |                     |
|         | unpause                                                                                     |                      |                   |         |                     |                     |
| unpause | nospam-526800 --log_dir                                                                     | nospam-526800        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:52 UTC | 22 Jan 24 10:52 UTC |
|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800                                 |                      |                   |         |                     |                     |
|         | unpause                                                                                     |                      |                   |         |                     |                     |
| unpause | nospam-526800 --log_dir                                                                     | nospam-526800        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:52 UTC | 22 Jan 24 10:52 UTC |
|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800                                 |                      |                   |         |                     |                     |
|         | unpause                                                                                     |                      |                   |         |                     |                     |
| stop    | nospam-526800 --log_dir                                                                     | nospam-526800        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:52 UTC | 22 Jan 24 10:53 UTC |
|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800                                 |                      |                   |         |                     |                     |
|         | stop                                                                                        |                      |                   |         |                     |                     |
| stop    | nospam-526800 --log_dir                                                                     | nospam-526800        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:53 UTC | 22 Jan 24 10:53 UTC |
|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800                                 |                      |                   |         |                     |                     |
|         | stop                                                                                        |                      |                   |         |                     |                     |
| stop    | nospam-526800 --log_dir                                                                     | nospam-526800        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:53 UTC | 22 Jan 24 10:53 UTC |
|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800                                 |                      |                   |         |                     |                     |
|         | stop                                                                                        |                      |                   |         |                     |                     |
| delete  | -p nospam-526800                                                                            | nospam-526800        | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:53 UTC | 22 Jan 24 10:53 UTC |
| start   | -p functional-969800                                                                        | functional-969800    | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:53 UTC | 22 Jan 24 10:57 UTC |
|         | --memory=4000                                                                               |                      |                   |         |                     |                     |
|         | --apiserver-port=8441                                                                       |                      |                   |         |                     |                     |
|         | --wait=all --driver=hyperv                                                                  |                      |                   |         |                     |                     |
| start   | -p functional-969800                                                                        | functional-969800    | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:57 UTC |                     |
|         | --alsologtostderr -v=8                                                                      |                      |                   |         |                     |                     |
| cache   | functional-969800 cache add                                                                 | functional-969800    | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:05 UTC | 22 Jan 24 11:07 UTC |
|         | registry.k8s.io/pause:3.1                                                                   |                      |                   |         |                     |                     |
| cache   | functional-969800 cache add                                                                 | functional-969800    | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:07 UTC | 22 Jan 24 11:09 UTC |
|         | registry.k8s.io/pause:3.3                                                                   |                      |                   |         |                     |                     |
| cache   | functional-969800 cache add                                                                 | functional-969800    | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:09 UTC | 22 Jan 24 11:11 UTC |
|         | registry.k8s.io/pause:latest                                                                |                      |                   |         |                     |                     |
| cache   | functional-969800 cache add                                                                 | functional-969800    | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:11 UTC | 22 Jan 24 11:12 UTC |
|         | minikube-local-cache-test:functional-969800                                                 |                      |                   |         |                     |                     |
| cache   | functional-969800 cache delete                                                              | functional-969800    | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:12 UTC | 22 Jan 24 11:12 UTC |
|         | minikube-local-cache-test:functional-969800                                                 |                      |                   |         |                     |                     |
| cache   | delete                                                                                      | minikube             | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:12 UTC | 22 Jan 24 11:12 UTC |
|         | registry.k8s.io/pause:3.3                                                                   |                      |                   |         |                     |                     |
| cache   | list                                                                                        | minikube             | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:12 UTC | 22 Jan 24 11:12 UTC |
| ssh     | functional-969800 ssh sudo                                                                  | functional-969800    | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:12 UTC |                     |
|         | crictl images                                                                               |                      |                   |         |                     |                     |
| ssh     | functional-969800                                                                           | functional-969800    | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:12 UTC |                     |
|         | ssh sudo docker rmi                                                                         |                      |                   |         |                     |                     |
|         | registry.k8s.io/pause:latest                                                                |                      |                   |         |                     |                     |
| ssh     | functional-969800 ssh                                                                       | functional-969800    | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:13 UTC |                     |
|         | sudo crictl inspecti                                                                        |                      |                   |         |                     |                     |
|         | registry.k8s.io/pause:latest                                                                |                      |                   |         |                     |                     |
| cache   | functional-969800 cache reload                                                              | functional-969800    | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:13 UTC | 22 Jan 24 11:15 UTC |
| ssh     | functional-969800 ssh                                                                       | functional-969800    | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:15 UTC |                     |
|         | sudo crictl inspecti                                                                        |                      |                   |         |                     |                     |
|         | registry.k8s.io/pause:latest                                                                |                      |                   |         |                     |                     |
| cache   | delete                                                                                      | minikube             | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:15 UTC | 22 Jan 24 11:15 UTC |
|         | registry.k8s.io/pause:3.1                                                                   |                      |                   |         |                     |                     |
| cache   | delete                                                                                      | minikube             | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:15 UTC | 22 Jan 24 11:15 UTC |
|         | registry.k8s.io/pause:latest                                                                |                      |                   |         |                     |                     |
| kubectl | functional-969800 kubectl --                                                                | functional-969800    | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:18 UTC |                     |
|         | --context functional-969800                                                                 |                      |                   |         |                     |                     |
|         | get pods                                                                                    |                      |                   |         |                     |                     |
| start   | -p functional-969800                                                                        | functional-969800    | minikube7\jenkins | v1.32.0 | 22 Jan 24 11:23 UTC |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision                    |                      |                   |         |                     |                     |
|         | --wait=all                                                                                  |                      |                   |         |                     |                     |
|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/01/22 11:23:24
Running on machine: minikube7
Binary: Built with gc go1.21.6 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0122 11:23:24.307882    6184 out.go:296] Setting OutFile to fd 720 ...
I0122 11:23:24.309149    6184 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0122 11:23:24.309149    6184 out.go:309] Setting ErrFile to fd 752...
I0122 11:23:24.309149    6184 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0122 11:23:24.329584    6184 out.go:303] Setting JSON to false
I0122 11:23:24.332927    6184 start.go:128] hostinfo: {"hostname":"minikube7","uptime":41973,"bootTime":1705880631,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3930 Build 19045.3930","kernelVersion":"10.0.19045.3930 Build 19045.3930","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
W0122 11:23:24.332998    6184 start.go:136] gopshost.Virtualization returned error: not implemented yet
I0122 11:23:24.334000    6184 out.go:177] * [functional-969800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
I0122 11:23:24.334878    6184 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
I0122 11:23:24.335648    6184 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0122 11:23:24.334878    6184 notify.go:220] Checking for updates...
I0122 11:23:24.336190    6184 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
I0122 11:23:24.338931    6184 out.go:177]   - MINIKUBE_LOCATION=18014
I0122 11:23:24.340085    6184 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0122 11:23:24.341323    6184 config.go:182] Loaded profile config "functional-969800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0122 11:23:24.341323    6184 driver.go:392] Setting default libvirt URI to qemu:///system
I0122 11:23:29.718284    6184 out.go:177] * Using the hyperv driver based on existing profile
I0122 11:23:29.718955    6184 start.go:298] selected driver: hyperv
I0122 11:23:29.718955    6184 start.go:902] validating driver "hyperv" against &{Name:functional-969800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:functional-969800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:172.23.182.59 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/
minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
I0122 11:23:29.719194    6184 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0122 11:23:29.768294    6184 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0122 11:23:29.768294    6184 cni.go:84] Creating CNI manager for ""
I0122 11:23:29.768294    6184 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0122 11:23:29.768294    6184 start_flags.go:321] config:
{Name:functional-969800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-969800 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:172.23.182.59 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/m
inikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
I0122 11:23:29.769226    6184 iso.go:125] acquiring lock: {Name:mkdec6f26395f187031104148d2e41f41e1ae42b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0122 11:23:29.770231    6184 out.go:177] * Starting control plane node functional-969800 in cluster functional-969800
I0122 11:23:29.771234    6184 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
I0122 11:23:29.771234    6184 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
I0122 11:23:29.771234    6184 cache.go:56] Caching tarball of preloaded images
I0122 11:23:29.771234    6184 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0122 11:23:29.772224    6184 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
I0122 11:23:29.772224    6184 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-969800\config.json ...
I0122 11:23:29.774224    6184 start.go:365] acquiring machines lock for functional-969800: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0122 11:23:29.774224    6184 start.go:369] acquired machines lock for "functional-969800" in 0s
I0122 11:23:29.775224    6184 start.go:96] Skipping create...Using existing machine configuration
I0122 11:23:29.775224    6184 fix.go:54] fixHost starting: 
I0122 11:23:29.775224    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
I0122 11:23:32.481029    6184 main.go:141] libmachine: [stdout =====>] : Running

I0122 11:23:32.481029    6184 main.go:141] libmachine: [stderr =====>] : 
I0122 11:23:32.481029    6184 fix.go:102] recreateIfNeeded on functional-969800: state=Running err=<nil>
W0122 11:23:32.481029    6184 fix.go:128] unexpected machine state, will restart: <nil>
I0122 11:23:32.482168    6184 out.go:177] * Updating the running hyperv "functional-969800" VM ...
I0122 11:23:32.483228    6184 machine.go:88] provisioning docker machine ...
I0122 11:23:32.483228    6184 buildroot.go:166] provisioning hostname "functional-969800"
I0122 11:23:32.483314    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
I0122 11:23:34.645849    6184 main.go:141] libmachine: [stdout =====>] : Running

I0122 11:23:34.646010    6184 main.go:141] libmachine: [stderr =====>] : 
I0122 11:23:34.646127    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
I0122 11:23:37.134046    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59

I0122 11:23:37.134046    6184 main.go:141] libmachine: [stderr =====>] : 
I0122 11:23:37.140870    6184 main.go:141] libmachine: Using SSH client type: native
I0122 11:23:37.141572    6184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
I0122 11:23:37.141572    6184 main.go:141] libmachine: About to run SSH command:
sudo hostname functional-969800 && echo "functional-969800" | sudo tee /etc/hostname
I0122 11:23:37.301806    6184 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-969800

I0122 11:23:37.301806    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
I0122 11:23:39.427005    6184 main.go:141] libmachine: [stdout =====>] : Running

I0122 11:23:39.427005    6184 main.go:141] libmachine: [stderr =====>] : 
I0122 11:23:39.427118    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
I0122 11:23:41.953477    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59

I0122 11:23:41.953542    6184 main.go:141] libmachine: [stderr =====>] : 
I0122 11:23:41.959195    6184 main.go:141] libmachine: Using SSH client type: native
I0122 11:23:41.959820    6184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
I0122 11:23:41.959820    6184 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\sfunctional-969800' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-969800/g' /etc/hosts;
			else 
				echo '127.0.1.1 functional-969800' | sudo tee -a /etc/hosts; 
			fi
		fi
I0122 11:23:42.101406    6184 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0122 11:23:42.101406    6184 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
I0122 11:23:42.101975    6184 buildroot.go:174] setting up certificates
I0122 11:23:42.102048    6184 provision.go:83] configureAuth start
I0122 11:23:42.102048    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
I0122 11:23:44.258658    6184 main.go:141] libmachine: [stdout =====>] : Running

I0122 11:23:44.258658    6184 main.go:141] libmachine: [stderr =====>] : 
I0122 11:23:44.258841    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
I0122 11:23:46.770030    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59

I0122 11:23:46.770030    6184 main.go:141] libmachine: [stderr =====>] : 
I0122 11:23:46.770279    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
I0122 11:23:48.845024    6184 main.go:141] libmachine: [stdout =====>] : Running

I0122 11:23:48.845024    6184 main.go:141] libmachine: [stderr =====>] : 
I0122 11:23:48.845024    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
I0122 11:23:51.359173    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59

I0122 11:23:51.359173    6184 main.go:141] libmachine: [stderr =====>] : 
I0122 11:23:51.359173    6184 provision.go:138] copyHostCerts
I0122 11:23:51.359418    6184 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
I0122 11:23:51.359418    6184 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
I0122 11:23:51.360221    6184 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
I0122 11:23:51.361961    6184 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
I0122 11:23:51.361961    6184 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
I0122 11:23:51.362031    6184 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
I0122 11:23:51.363530    6184 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
I0122 11:23:51.363530    6184 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
I0122 11:23:51.363530    6184 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
I0122 11:23:51.364934    6184 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-969800 san=[172.23.182.59 172.23.182.59 localhost 127.0.0.1 minikube functional-969800]
I0122 11:23:51.531845    6184 provision.go:172] copyRemoteCerts
I0122 11:23:51.548094    6184 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0122 11:23:51.548094    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
I0122 11:23:53.701592    6184 main.go:141] libmachine: [stdout =====>] : Running

I0122 11:23:53.701592    6184 main.go:141] libmachine: [stderr =====>] : 
I0122 11:23:53.701592    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
I0122 11:23:56.185478    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59

I0122 11:23:56.185774    6184 main.go:141] libmachine: [stderr =====>] : 
I0122 11:23:56.186223    6184 sshutil.go:53] new ssh client: &{IP:172.23.182.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-969800\id_rsa Username:docker}
I0122 11:23:56.297102    6184 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7489862s)
I0122 11:23:56.297740    6184 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0122 11:23:56.335241    6184 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
I0122 11:23:56.374000    6184 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0122 11:23:56.413292    6184 provision.go:86] duration metric: configureAuth took 14.3111796s
I0122 11:23:56.413292    6184 buildroot.go:189] setting minikube options for container-runtime
I0122 11:23:56.413292    6184 config.go:182] Loaded profile config "functional-969800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0122 11:23:56.413869    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
I0122 11:23:58.538141    6184 main.go:141] libmachine: [stdout =====>] : Running

I0122 11:23:58.538141    6184 main.go:141] libmachine: [stderr =====>] : 
I0122 11:23:58.538141    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
I0122 11:24:01.056353    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59

I0122 11:24:01.056452    6184 main.go:141] libmachine: [stderr =====>] : 
I0122 11:24:01.061778    6184 main.go:141] libmachine: Using SSH client type: native
I0122 11:24:01.062464    6184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
I0122 11:24:01.062464    6184 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0122 11:24:01.204273    6184 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs

I0122 11:24:01.204273    6184 buildroot.go:70] root file system type: tmpfs
I0122 11:24:01.204617    6184 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0122 11:24:01.204668    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
I0122 11:24:03.346406    6184 main.go:141] libmachine: [stdout =====>] : Running

I0122 11:24:03.346406    6184 main.go:141] libmachine: [stderr =====>] : 
I0122 11:24:03.346406    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
I0122 11:24:05.905128    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59

I0122 11:24:05.905128    6184 main.go:141] libmachine: [stderr =====>] : 
I0122 11:24:05.911339    6184 main.go:141] libmachine: Using SSH client type: native
I0122 11:24:05.912244    6184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
I0122 11:24:05.912244    6184 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0122 11:24:06.073724    6184 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0122 11:24:06.073724    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
I0122 11:24:08.229437    6184 main.go:141] libmachine: [stdout =====>] : Running

I0122 11:24:08.229755    6184 main.go:141] libmachine: [stderr =====>] : 
I0122 11:24:08.229755    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
I0122 11:24:10.819459    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59

I0122 11:24:10.819459    6184 main.go:141] libmachine: [stderr =====>] : 
I0122 11:24:10.825018    6184 main.go:141] libmachine: Using SSH client type: native
I0122 11:24:10.825726    6184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
I0122 11:24:10.825726    6184 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0122 11:24:10.978172    6184 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0122 11:24:10.978172    6184 machine.go:91] provisioned docker machine in 38.4947703s
I0122 11:24:10.978172    6184 start.go:300] post-start starting for "functional-969800" (driver="hyperv")
I0122 11:24:10.978172    6184 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0122 11:24:10.991666    6184 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0122 11:24:10.991666    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
I0122 11:24:13.158955    6184 main.go:141] libmachine: [stdout =====>] : Running

I0122 11:24:13.159108    6184 main.go:141] libmachine: [stderr =====>] : 
I0122 11:24:13.159108    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
I0122 11:24:15.702276    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59

I0122 11:24:15.702276    6184 main.go:141] libmachine: [stderr =====>] : 
I0122 11:24:15.702816    6184 sshutil.go:53] new ssh client: &{IP:172.23.182.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-969800\id_rsa Username:docker}
I0122 11:24:15.809972    6184 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.818285s)
I0122 11:24:15.825715    6184 ssh_runner.go:195] Run: cat /etc/os-release
I0122 11:24:15.831293    6184 info.go:137] Remote host: Buildroot 2021.02.12
I0122 11:24:15.831293    6184 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
I0122 11:24:15.831873    6184 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
I0122 11:24:15.833289    6184 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> 63962.pem in /etc/ssl/certs
I0122 11:24:15.834521    6184 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\6396\hosts -> hosts in /etc/test/nested/copy/6396
I0122 11:24:15.845569    6184 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/6396
I0122 11:24:15.860176    6184 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem --> /etc/ssl/certs/63962.pem (1708 bytes)
I0122 11:24:15.898930    6184 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\6396\hosts --> /etc/test/nested/copy/6396/hosts (40 bytes)
I0122 11:24:15.935785    6184 start.go:303] post-start completed in 4.9575914s
I0122 11:24:15.935912    6184 fix.go:56] fixHost completed within 46.1604801s
I0122 11:24:15.935989    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
I0122 11:24:18.084871    6184 main.go:141] libmachine: [stdout =====>] : Running

I0122 11:24:18.085048    6184 main.go:141] libmachine: [stderr =====>] : 
I0122 11:24:18.085048    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
I0122 11:24:20.652032    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59

I0122 11:24:20.652032    6184 main.go:141] libmachine: [stderr =====>] : 
I0122 11:24:20.657685    6184 main.go:141] libmachine: Using SSH client type: native
I0122 11:24:20.658373    6184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
I0122 11:24:20.658373    6184 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0122 11:24:20.786672    6184 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705922660.787395205

I0122 11:24:20.786672    6184 fix.go:206] guest clock: 1705922660.787395205
I0122 11:24:20.786672    6184 fix.go:219] Guest: 2024-01-22 11:24:20.787395205 +0000 UTC Remote: 2024-01-22 11:24:15.9359125 +0000 UTC m=+51.799138601 (delta=4.851482705s)
I0122 11:24:20.786785    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
I0122 11:24:22.947432    6184 main.go:141] libmachine: [stdout =====>] : Running

I0122 11:24:22.947432    6184 main.go:141] libmachine: [stderr =====>] : 
I0122 11:24:22.947617    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
I0122 11:24:25.472703    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59

I0122 11:24:25.472703    6184 main.go:141] libmachine: [stderr =====>] : 
I0122 11:24:25.478467    6184 main.go:141] libmachine: Using SSH client type: native
I0122 11:24:25.479226    6184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.182.59 22 <nil> <nil>}
I0122 11:24:25.479226    6184 main.go:141] libmachine: About to run SSH command:
sudo date -s @1705922660
I0122 11:24:25.631726    6184 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan 22 11:24:20 UTC 2024

I0122 11:24:25.631726    6184 fix.go:226] clock set: Mon Jan 22 11:24:20 UTC 2024
(err=<nil>)
I0122 11:24:25.631726    6184 start.go:83] releasing machines lock for "functional-969800", held for 55.8572498s
I0122 11:24:25.632332    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
I0122 11:24:27.715614    6184 main.go:141] libmachine: [stdout =====>] : Running

I0122 11:24:27.715614    6184 main.go:141] libmachine: [stderr =====>] : 
I0122 11:24:27.715955    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
I0122 11:24:30.263943    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59

I0122 11:24:30.264303    6184 main.go:141] libmachine: [stderr =====>] : 
I0122 11:24:30.269166    6184 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0122 11:24:30.269244    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
I0122 11:24:30.283449    6184 ssh_runner.go:195] Run: cat /version.json
I0122 11:24:30.283449    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-969800 ).state
I0122 11:24:32.445469    6184 main.go:141] libmachine: [stdout =====>] : Running

I0122 11:24:32.445469    6184 main.go:141] libmachine: [stderr =====>] : 
I0122 11:24:32.445469    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
I0122 11:24:32.476203    6184 main.go:141] libmachine: [stdout =====>] : Running

I0122 11:24:32.476319    6184 main.go:141] libmachine: [stderr =====>] : 
I0122 11:24:32.476319    6184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-969800 ).networkadapters[0]).ipaddresses[0]
I0122 11:24:35.056514    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59

I0122 11:24:35.056514    6184 main.go:141] libmachine: [stderr =====>] : 
I0122 11:24:35.056514    6184 sshutil.go:53] new ssh client: &{IP:172.23.182.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-969800\id_rsa Username:docker}
I0122 11:24:35.103081    6184 main.go:141] libmachine: [stdout =====>] : 172.23.182.59

I0122 11:24:35.103081    6184 main.go:141] libmachine: [stderr =====>] : 
I0122 11:24:35.103551    6184 sshutil.go:53] new ssh client: &{IP:172.23.182.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-969800\id_rsa Username:docker}
I0122 11:24:45.230724    6184 ssh_runner.go:235] Completed: cat /version.json: (14.947208s)
I0122 11:24:45.230948    6184 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (14.9616545s)
W0122 11:24:45.230948    6184 start.go:843] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28
stdout:

stderr:
curl: (28) Resolving timed out after 2000 milliseconds
W0122 11:24:45.230948    6184 out.go:239] ! This VM is having trouble accessing https://registry.k8s.io
W0122 11:24:45.230948    6184 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
I0122 11:24:45.247121    6184 ssh_runner.go:195] Run: systemctl --version
I0122 11:24:45.268459    6184 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0122 11:24:45.276863    6184 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0122 11:24:45.289111    6184 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0122 11:24:45.302602    6184 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0122 11:24:45.302602    6184 start.go:475] detecting cgroup driver to use...
I0122 11:24:45.302987    6184 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0122 11:24:45.344679    6184 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0122 11:24:45.376694    6184 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0122 11:24:45.391228    6184 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0122 11:24:45.403401    6184 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0122 11:24:45.434227    6184 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0122 11:24:45.467889    6184 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0122 11:24:45.495529    6184 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0122 11:24:45.526566    6184 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0122 11:24:45.555545    6184 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0122 11:24:45.586280    6184 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0122 11:24:45.619757    6184 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0122 11:24:45.651490    6184 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0122 11:24:45.826436    6184 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0122 11:24:45.853776    6184 start.go:475] detecting cgroup driver to use...
I0122 11:24:45.866418    6184 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0122 11:24:45.900041    6184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0122 11:24:45.933396    6184 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0122 11:24:45.974943    6184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0122 11:24:46.005340    6184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0122 11:24:46.026136    6184 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0122 11:24:46.068325    6184 ssh_runner.go:195] Run: which cri-dockerd
I0122 11:24:46.086267    6184 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0122 11:24:46.099080    6184 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0122 11:24:46.138821    6184 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0122 11:24:46.316107    6184 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0122 11:24:46.478342    6184 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0122 11:24:46.478342    6184 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0122 11:24:46.517039    6184 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0122 11:24:46.692069    6184 ssh_runner.go:195] Run: sudo systemctl restart docker
I0122 11:26:11.896948    6184 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m25.2043524s)
I0122 11:26:11.910605    6184 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
I0122 11:26:11.977390    6184 out.go:177] 
W0122 11:26:11.978052    6184 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:

stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.

sudo journalctl --no-pager -u docker:
-- stdout --
-- Journal begins at Mon 2024-01-22 10:54:50 UTC, ends at Mon 2024-01-22 11:26:11 UTC. --
Jan 22 10:55:41 functional-969800 systemd[1]: Starting Docker Application Container Engine...
Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.640172956Z" level=info msg="Starting up"
Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.641196772Z" level=info msg="containerd not running, starting managed containerd"
Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.643170703Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=696
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.681307999Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.711852976Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.711950178Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.714822923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.714949725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715319930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715475833Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715577834Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715725437Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715815938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.715939840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716415947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716515049Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716533749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716692852Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716783253Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716856454Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.716943756Z" level=info msg="metadata content store policy set" policy=shared
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731298280Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731388081Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731408582Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731467083Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731484883Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731498383Z" level=info msg="NRI interface is disabled by configuration."
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731528784Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731677686Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731783888Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731806788Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731822488Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731838588Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731859089Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731890989Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731924790Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731943290Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731959290Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731973491Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.731992491Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732288795Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732814904Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732879305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732900805Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.732940506Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733012907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733046407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733235810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733295711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733311411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733344312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733358612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733373012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733387713Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733473314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733594716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733615016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733643517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733664217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733681917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733711618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733724518Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733749418Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733762418Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.733774019Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734197325Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734365128Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734471630Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Jan 22 10:55:41 functional-969800 dockerd[696]: time="2024-01-22T10:55:41.734496030Z" level=info msg="containerd successfully booted in 0.054269s"
Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.768676964Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.783849401Z" level=info msg="Loading containers: start."
Jan 22 10:55:41 functional-969800 dockerd[690]: time="2024-01-22T10:55:41.997712742Z" level=info msg="Loading containers: done."
Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022214804Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022261705Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022274505Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022281005Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022301806Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.022416907Z" level=info msg="Daemon has completed initialization"
Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.083652704Z" level=info msg="API listen on /var/run/docker.sock"
Jan 22 10:55:42 functional-969800 dockerd[690]: time="2024-01-22T10:55:42.083926108Z" level=info msg="API listen on [::]:2376"
Jan 22 10:55:42 functional-969800 systemd[1]: Started Docker Application Container Engine.
Jan 22 10:56:12 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.621957907Z" level=info msg="Processing signal 'terminated'"
Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623204307Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623373807Z" level=info msg="Daemon shutdown complete"
Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623417807Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Jan 22 10:56:12 functional-969800 dockerd[690]: time="2024-01-22T10:56:12.623429307Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Jan 22 10:56:13 functional-969800 systemd[1]: docker.service: Succeeded.
Jan 22 10:56:13 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
Jan 22 10:56:13 functional-969800 systemd[1]: Starting Docker Application Container Engine...
Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.697995107Z" level=info msg="Starting up"
Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.699240007Z" level=info msg="containerd not running, starting managed containerd"
Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.700630907Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1031
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.732892407Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.759794007Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.759947407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.762945807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763156107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763474707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.763800907Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764170707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764208607Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764225307Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764251107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764439107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764536207Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764552707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764742807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764836407Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764867007Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.764897007Z" level=info msg="metadata content store policy set" policy=shared
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765451307Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765545607Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765570407Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765599407Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765631407Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765670507Z" level=info msg="NRI interface is disabled by configuration."
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765715707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765770507Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765918107Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765954407Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.765992607Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766011607Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766135807Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766161807Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766178607Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766211907Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766239507Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766258407Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766273107Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.766317107Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767449407Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767559907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767585207Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767614707Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767679407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767698307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767713707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767728107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767743307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767759107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767773607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767790107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767807907Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767846207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767884107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767901707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767917307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767934707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767953007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.767985707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768000707Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768018307Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768110907Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.768150607Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770293407Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770589207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770747207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Jan 22 10:56:13 functional-969800 dockerd[1031]: time="2024-01-22T10:56:13.770779907Z" level=info msg="containerd successfully booted in 0.039287s"
Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.796437607Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.812301607Z" level=info msg="Loading containers: start."
Jan 22 10:56:13 functional-969800 dockerd[1025]: time="2024-01-22T10:56:13.959771407Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.025356107Z" level=info msg="Loading containers: done."
Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043938307Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043967507Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043976207Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.043982907Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.044002007Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.044081507Z" level=info msg="Daemon has completed initialization"
Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.084793407Z" level=info msg="API listen on /var/run/docker.sock"
Jan 22 10:56:14 functional-969800 systemd[1]: Started Docker Application Container Engine.
Jan 22 10:56:14 functional-969800 dockerd[1025]: time="2024-01-22T10:56:14.087289507Z" level=info msg="API listen on [::]:2376"
Jan 22 10:56:24 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.888530207Z" level=info msg="Processing signal 'terminated'"
Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890215407Z" level=info msg="Daemon shutdown complete"
Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890260307Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890587307Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
Jan 22 10:56:24 functional-969800 dockerd[1025]: time="2024-01-22T10:56:24.890740407Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Jan 22 10:56:25 functional-969800 systemd[1]: docker.service: Succeeded.
Jan 22 10:56:25 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
Jan 22 10:56:25 functional-969800 systemd[1]: Starting Docker Application Container Engine...
Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.965408907Z" level=info msg="Starting up"
Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.966518207Z" level=info msg="containerd not running, starting managed containerd"
Jan 22 10:56:25 functional-969800 dockerd[1331]: time="2024-01-22T10:56:25.967588307Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1337
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.001795407Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.035509507Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.035631707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038165807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038334707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038628507Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038744407Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038816807Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038858807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038871007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.038894707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039248607Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039359407Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039378107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039547707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039641707Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039668207Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039685807Z" level=info msg="metadata content store policy set" policy=shared
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039841507Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.039933007Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040079807Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040230207Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040269807Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040281707Z" level=info msg="NRI interface is disabled by configuration."
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040294707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040361507Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040377307Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040391107Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040409607Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040429307Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040449707Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040463607Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040482307Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040514007Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040527507Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040540307Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040551607Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.040608107Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042558207Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042674107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042695907Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042734707Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042787207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042916307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042937207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042958307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042974407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.042993607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043009507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043463207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043558107Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043662307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043706107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043727707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043761007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043778507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043801907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043893407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043920207Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043942907Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043958907Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.043974107Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044581807Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044705407Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044803207Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Jan 22 10:56:26 functional-969800 dockerd[1337]: time="2024-01-22T10:56:26.044899507Z" level=info msg="containerd successfully booted in 0.045219s"
Jan 22 10:56:26 functional-969800 dockerd[1331]: time="2024-01-22T10:56:26.371254407Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.294692607Z" level=info msg="Loading containers: start."
Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.459462607Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.525291207Z" level=info msg="Loading containers: done."
Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544516307Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544552907Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544562107Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544572107Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544597307Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.544708807Z" level=info msg="Daemon has completed initialization"
Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.576971407Z" level=info msg="API listen on [::]:2376"
Jan 22 10:56:30 functional-969800 dockerd[1331]: time="2024-01-22T10:56:30.577092907Z" level=info msg="API listen on /var/run/docker.sock"
Jan 22 10:56:30 functional-969800 systemd[1]: Started Docker Application Container Engine.
Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643012417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643161510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643191508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.643608487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648436541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648545436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648613232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.648628031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650622030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650800621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650820520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.650831019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.675561460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.675860444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.676066234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 22 10:56:37 functional-969800 dockerd[1337]: time="2024-01-22T10:56:37.676182428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627214283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627479370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627708859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.627925149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.661590442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.665215768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.665413259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.667762047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.837739331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838092414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838119012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 22 10:56:38 functional-969800 dockerd[1337]: time="2024-01-22T10:56:38.838132512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029726252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029885345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029907544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 22 10:56:39 functional-969800 dockerd[1337]: time="2024-01-22T10:56:39.029921943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442471250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442551949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442585148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.442603448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.636810797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637124693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637309591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 22 10:56:58 functional-969800 dockerd[1337]: time="2024-01-22T10:56:58.637408590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.128989239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129139537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129211936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 22 10:56:59 functional-969800 dockerd[1337]: time="2024-01-22T10:56:59.129226336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.005359252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.018527500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.020526276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 22 10:57:00 functional-969800 dockerd[1337]: time="2024-01-22T10:57:00.020768074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531424444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531481843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531496243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 22 10:57:06 functional-969800 dockerd[1337]: time="2024-01-22T10:57:06.531507943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.163527770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.164982859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.165023859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 22 10:57:07 functional-969800 dockerd[1337]: time="2024-01-22T10:57:07.165340756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 22 10:58:54 functional-969800 systemd[1]: Stopping Docker Application Container Engine...
Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.271367222Z" level=info msg="Processing signal 'terminated'"
Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.408491526Z" level=info msg="ignoring event" container=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.408935427Z" level=info msg="shim disconnected" id=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c namespace=moby
Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.409367629Z" level=warning msg="cleaning up after shim disconnected" id=73ac761b6d10ddea612aedb2a27df75c95389f807b3af75beb11fc89a69ac40c namespace=moby
Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.409386829Z" level=info msg="cleaning up dead shim" namespace=moby
Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482131396Z" level=info msg="shim disconnected" id=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf namespace=moby
Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482242197Z" level=warning msg="cleaning up after shim disconnected" id=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf namespace=moby
Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.482643498Z" level=info msg="cleaning up dead shim" namespace=moby
Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.485265908Z" level=info msg="ignoring event" container=946767de576920ef8e82992bd09d6be1d2c3703a39d9a51d31022e98449f28cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.514982317Z" level=info msg="ignoring event" container=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.517274025Z" level=info msg="shim disconnected" id=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a namespace=moby
Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.518024528Z" level=warning msg="cleaning up after shim disconnected" id=9dadd45c2c6d1fe9d27aefd562d6392df02ffef373acae3b1a7f80cadd2b195a namespace=moby
Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.518088128Z" level=info msg="cleaning up dead shim" namespace=moby
Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.533465185Z" level=info msg="ignoring event" container=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533775786Z" level=info msg="shim disconnected" id=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 namespace=moby
Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533878986Z" level=warning msg="cleaning up after shim disconnected" id=4e7c9405d323a1cac97dc72eb0ab42603d6dced2da6fdc9fa4ab2e46778d16c0 namespace=moby
Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.533896387Z" level=info msg="cleaning up dead shim" namespace=moby
Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.541816616Z" level=info msg="ignoring event" container=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.544778327Z" level=info msg="shim disconnected" id=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 namespace=moby
Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.545052128Z" level=warning msg="cleaning up after shim disconnected" id=e7eef78cd7b5f3064257dbe4f12b967a47ede476ceeeab93c2ac94b7809a4796 namespace=moby
Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.545071928Z" level=info msg="cleaning up dead shim" namespace=moby
Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.564756500Z" level=info msg="ignoring event" container=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.566095405Z" level=info msg="shim disconnected" id=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 namespace=moby
Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.580413557Z" level=warning msg="cleaning up after shim disconnected" id=8f92fa469bf7443d607d74f0f285cad5109c3ee4f83491c76711c1a8ec9ecf24 namespace=moby
Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.580441258Z" level=info msg="cleaning up dead shim" namespace=moby
Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590163393Z" level=info msg="shim disconnected" id=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f namespace=moby
Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590197393Z" level=warning msg="cleaning up after shim disconnected" id=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f namespace=moby
Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.590265894Z" level=info msg="cleaning up dead shim" namespace=moby
Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.600700032Z" level=info msg="shim disconnected" id=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b namespace=moby
Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.600935033Z" level=info msg="ignoring event" container=016df9da1c2fe9587bd03d5720bda38d6fe9d7025b32117c71ee41489e116d0f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.600978833Z" level=info msg="ignoring event" container=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.611137470Z" level=info msg="ignoring event" container=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611370671Z" level=warning msg="cleaning up after shim disconnected" id=5be1e6ff2d64c87f30c341aa6b9929564e6c0091e38962b4b13494c0b7d72a5b namespace=moby
Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611429671Z" level=info msg="cleaning up dead shim" namespace=moby
Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.611635772Z" level=info msg="shim disconnected" id=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b namespace=moby
Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.612703976Z" level=warning msg="cleaning up after shim disconnected" id=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b namespace=moby
Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.613303678Z" level=info msg="cleaning up dead shim" namespace=moby
Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.613702780Z" level=info msg="ignoring event" container=ceac54dac0e316aaf4040d2421422cf0842a79b2dc84a503c8a9c7c070c8e15b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.616524590Z" level=info msg="ignoring event" container=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.618830499Z" level=info msg="shim disconnected" id=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf namespace=moby
Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.618976199Z" level=warning msg="cleaning up after shim disconnected" id=9e3089984f0b8568de22f9fb30b70ef091bbd763915b46c6486388cc519bf7bf namespace=moby
Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.627451230Z" level=info msg="cleaning up dead shim" namespace=moby
Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.628495534Z" level=info msg="shim disconnected" id=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d namespace=moby
Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.630007140Z" level=warning msg="cleaning up after shim disconnected" id=9beecb6d09eb679a96fb012dc2198f015538328d39996774947a253a87df150d namespace=moby
Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.630196840Z" level=info msg="cleaning up dead shim" namespace=moby
Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.648394307Z" level=info msg="shim disconnected" id=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c namespace=moby
Jan 22 10:58:54 functional-969800 dockerd[1331]: time="2024-01-22T10:58:54.650553715Z" level=info msg="ignoring event" container=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.658301844Z" level=warning msg="cleaning up after shim disconnected" id=045aa03c1739d5b0a71e39dcf8a2cfd59ea66cc0f4d098e1693a93fb9995954c namespace=moby
Jan 22 10:58:54 functional-969800 dockerd[1337]: time="2024-01-22T10:58:54.658345944Z" level=info msg="cleaning up dead shim" namespace=moby
Jan 22 10:58:59 functional-969800 dockerd[1331]: time="2024-01-22T10:58:59.444520533Z" level=info msg="ignoring event" container=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.447631244Z" level=info msg="shim disconnected" id=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 namespace=moby
Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.448443247Z" level=warning msg="cleaning up after shim disconnected" id=9b0f2aa8c8f0ee8593f9bb3aba161534e2a13b53a834ed8405cbc32c76e39f87 namespace=moby
Jan 22 10:58:59 functional-969800 dockerd[1337]: time="2024-01-22T10:58:59.448523248Z" level=info msg="cleaning up dead shim" namespace=moby
Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.436253177Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15
Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.489047672Z" level=info msg="ignoring event" container=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.490548477Z" level=info msg="shim disconnected" id=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 namespace=moby
Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.490993079Z" level=warning msg="cleaning up after shim disconnected" id=ffd32cd42f6049d5a76c9ad3e3ada8ed586f0fd99e2e5dc0ff152938a59e7e15 namespace=moby
Jan 22 10:59:04 functional-969800 dockerd[1337]: time="2024-01-22T10:59:04.491010379Z" level=info msg="cleaning up dead shim" namespace=moby
Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.536562746Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.536997048Z" level=info msg="Daemon shutdown complete"
Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.537118848Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Jan 22 10:59:04 functional-969800 dockerd[1331]: time="2024-01-22T10:59:04.537155648Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Jan 22 10:59:05 functional-969800 systemd[1]: docker.service: Succeeded.
Jan 22 10:59:05 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
Jan 22 10:59:05 functional-969800 systemd[1]: Starting Docker Application Container Engine...
Jan 22 10:59:05 functional-969800 dockerd[5681]: time="2024-01-22T10:59:05.618184021Z" level=info msg="Starting up"
Jan 22 11:00:05 functional-969800 dockerd[5681]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jan 22 11:00:05 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 11:00:05 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
Jan 22 11:00:05 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
Jan 22 11:00:05 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
Jan 22 11:00:05 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
Jan 22 11:00:05 functional-969800 systemd[1]: Starting Docker Application Container Engine...
Jan 22 11:00:05 functional-969800 dockerd[5820]: time="2024-01-22T11:00:05.817795905Z" level=info msg="Starting up"
Jan 22 11:01:05 functional-969800 dockerd[5820]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jan 22 11:01:05 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 11:01:05 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
Jan 22 11:01:05 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
Jan 22 11:01:05 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
Jan 22 11:01:05 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
Jan 22 11:01:05 functional-969800 systemd[1]: Starting Docker Application Container Engine...
Jan 22 11:01:06 functional-969800 dockerd[6034]: time="2024-01-22T11:01:06.042578101Z" level=info msg="Starting up"
Jan 22 11:02:06 functional-969800 dockerd[6034]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jan 22 11:02:06 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 11:02:06 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
Jan 22 11:02:06 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
Jan 22 11:02:06 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Jan 22 11:02:06 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
Jan 22 11:02:06 functional-969800 systemd[1]: Starting Docker Application Container Engine...
Jan 22 11:02:06 functional-969800 dockerd[6195]: time="2024-01-22T11:02:06.252465592Z" level=info msg="Starting up"
Jan 22 11:03:06 functional-969800 dockerd[6195]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jan 22 11:03:06 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 11:03:06 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
Jan 22 11:03:06 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
Jan 22 11:03:06 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 4.
Jan 22 11:03:06 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
Jan 22 11:03:06 functional-969800 systemd[1]: Starting Docker Application Container Engine...
Jan 22 11:03:06 functional-969800 dockerd[6342]: time="2024-01-22T11:03:06.467155089Z" level=info msg="Starting up"
Jan 22 11:04:06 functional-969800 dockerd[6342]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jan 22 11:04:06 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 11:04:06 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
Jan 22 11:04:06 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
Jan 22 11:04:06 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 5.
Jan 22 11:04:06 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
Jan 22 11:04:06 functional-969800 systemd[1]: Starting Docker Application Container Engine...
Jan 22 11:04:06 functional-969800 dockerd[6610]: time="2024-01-22T11:04:06.803106897Z" level=info msg="Starting up"
Jan 22 11:05:06 functional-969800 dockerd[6610]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jan 22 11:05:06 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 11:05:06 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
Jan 22 11:05:06 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
Jan 22 11:05:06 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 6.
Jan 22 11:05:06 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
Jan 22 11:05:06 functional-969800 systemd[1]: Starting Docker Application Container Engine...
Jan 22 11:05:07 functional-969800 dockerd[6768]: time="2024-01-22T11:05:07.038180841Z" level=info msg="Starting up"
Jan 22 11:06:07 functional-969800 dockerd[6768]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jan 22 11:06:07 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 11:06:07 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
Jan 22 11:06:07 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
Jan 22 11:06:07 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 7.
Jan 22 11:06:07 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
Jan 22 11:06:07 functional-969800 systemd[1]: Starting Docker Application Container Engine...
Jan 22 11:06:07 functional-969800 dockerd[6963]: time="2024-01-22T11:06:07.289156413Z" level=info msg="Starting up"
Jan 22 11:07:07 functional-969800 dockerd[6963]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jan 22 11:07:07 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 11:07:07 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
Jan 22 11:07:07 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
Jan 22 11:07:07 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 8.
Jan 22 11:07:07 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
Jan 22 11:07:07 functional-969800 systemd[1]: Starting Docker Application Container Engine...
Jan 22 11:07:07 functional-969800 dockerd[7116]: time="2024-01-22T11:07:07.566278979Z" level=info msg="Starting up"
Jan 22 11:08:07 functional-969800 dockerd[7116]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jan 22 11:08:07 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 11:08:07 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
Jan 22 11:08:07 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
Jan 22 11:08:07 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 9.
Jan 22 11:08:07 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
Jan 22 11:08:07 functional-969800 systemd[1]: Starting Docker Application Container Engine...
Jan 22 11:08:07 functional-969800 dockerd[7273]: time="2024-01-22T11:08:07.800042513Z" level=info msg="Starting up"
Jan 22 11:09:07 functional-969800 dockerd[7273]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jan 22 11:09:07 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 11:09:07 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
Jan 22 11:09:07 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
Jan 22 11:09:07 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 10.
Jan 22 11:09:07 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
Jan 22 11:09:07 functional-969800 systemd[1]: Starting Docker Application Container Engine...
Jan 22 11:09:08 functional-969800 dockerd[7420]: time="2024-01-22T11:09:08.055663358Z" level=info msg="Starting up"
Jan 22 11:10:08 functional-969800 dockerd[7420]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jan 22 11:10:08 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 11:10:08 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
Jan 22 11:10:08 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
Jan 22 11:10:08 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 11.
Jan 22 11:10:08 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
Jan 22 11:10:08 functional-969800 systemd[1]: Starting Docker Application Container Engine...
Jan 22 11:10:08 functional-969800 dockerd[7582]: time="2024-01-22T11:10:08.298276757Z" level=info msg="Starting up"
Jan 22 11:11:08 functional-969800 dockerd[7582]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jan 22 11:11:08 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 11:11:08 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
Jan 22 11:11:08 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
Jan 22 11:11:08 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 12.
Jan 22 11:11:08 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
Jan 22 11:11:08 functional-969800 systemd[1]: Starting Docker Application Container Engine...
Jan 22 11:11:08 functional-969800 dockerd[7730]: time="2024-01-22T11:11:08.568515683Z" level=info msg="Starting up"
Jan 22 11:12:08 functional-969800 dockerd[7730]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jan 22 11:12:08 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 11:12:08 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
Jan 22 11:12:08 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
Jan 22 11:12:08 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 13.
Jan 22 11:12:08 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
Jan 22 11:12:08 functional-969800 systemd[1]: Starting Docker Application Container Engine...
Jan 22 11:12:08 functional-969800 dockerd[7901]: time="2024-01-22T11:12:08.793897827Z" level=info msg="Starting up"
Jan 22 11:13:08 functional-969800 dockerd[7901]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jan 22 11:13:08 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 11:13:08 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
Jan 22 11:13:08 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
Jan 22 11:13:08 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 14.
Jan 22 11:13:08 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
Jan 22 11:13:08 functional-969800 systemd[1]: Starting Docker Application Container Engine...
Jan 22 11:13:09 functional-969800 dockerd[8084]: time="2024-01-22T11:13:09.049002906Z" level=info msg="Starting up"
Jan 22 11:14:09 functional-969800 dockerd[8084]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jan 22 11:14:09 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 11:14:09 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
Jan 22 11:14:09 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
Jan 22 11:14:09 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 15.
Jan 22 11:14:09 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
Jan 22 11:14:09 functional-969800 systemd[1]: Starting Docker Application Container Engine...
Jan 22 11:14:09 functional-969800 dockerd[8262]: time="2024-01-22T11:14:09.310111876Z" level=info msg="Starting up"
Jan 22 11:15:09 functional-969800 dockerd[8262]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jan 22 11:15:09 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 11:15:09 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
Jan 22 11:15:09 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
Jan 22 11:15:09 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 16.
Jan 22 11:15:09 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
Jan 22 11:15:09 functional-969800 systemd[1]: Starting Docker Application Container Engine...
Jan 22 11:15:09 functional-969800 dockerd[8424]: time="2024-01-22T11:15:09.550451227Z" level=info msg="Starting up"
Jan 22 11:16:09 functional-969800 dockerd[8424]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jan 22 11:16:09 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 11:16:09 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
Jan 22 11:16:09 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
Jan 22 11:16:09 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 17.
Jan 22 11:16:09 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
Jan 22 11:16:09 functional-969800 systemd[1]: Starting Docker Application Container Engine...
Jan 22 11:16:09 functional-969800 dockerd[8663]: time="2024-01-22T11:16:09.793536981Z" level=info msg="Starting up"
Jan 22 11:17:09 functional-969800 dockerd[8663]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jan 22 11:17:09 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 11:17:09 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
Jan 22 11:17:09 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
Jan 22 11:17:09 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 18.
Jan 22 11:17:09 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
Jan 22 11:17:09 functional-969800 systemd[1]: Starting Docker Application Container Engine...
Jan 22 11:17:09 functional-969800 dockerd[8816]: time="2024-01-22T11:17:09.993681624Z" level=info msg="Starting up"
Jan 22 11:18:10 functional-969800 dockerd[8816]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jan 22 11:18:10 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 11:18:10 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
Jan 22 11:18:10 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
Jan 22 11:18:10 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 19.
Jan 22 11:18:10 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
Jan 22 11:18:10 functional-969800 systemd[1]: Starting Docker Application Container Engine...
Jan 22 11:18:10 functional-969800 dockerd[8970]: time="2024-01-22T11:18:10.219717628Z" level=info msg="Starting up"
Jan 22 11:19:10 functional-969800 dockerd[8970]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jan 22 11:19:10 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 11:19:10 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
Jan 22 11:19:10 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
Jan 22 11:19:10 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 20.
Jan 22 11:19:10 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
Jan 22 11:19:10 functional-969800 systemd[1]: Starting Docker Application Container Engine...
Jan 22 11:19:10 functional-969800 dockerd[9236]: time="2024-01-22T11:19:10.567700970Z" level=info msg="Starting up"
Jan 22 11:20:10 functional-969800 dockerd[9236]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jan 22 11:20:10 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 11:20:10 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
Jan 22 11:20:10 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
Jan 22 11:20:10 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 21.
Jan 22 11:20:10 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
Jan 22 11:20:10 functional-969800 systemd[1]: Starting Docker Application Container Engine...
Jan 22 11:20:10 functional-969800 dockerd[9393]: time="2024-01-22T11:20:10.785376681Z" level=info msg="Starting up"
Jan 22 11:21:10 functional-969800 dockerd[9393]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jan 22 11:21:10 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 11:21:10 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
Jan 22 11:21:10 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
Jan 22 11:21:10 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 22.
Jan 22 11:21:10 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
Jan 22 11:21:10 functional-969800 systemd[1]: Starting Docker Application Container Engine...
Jan 22 11:21:11 functional-969800 dockerd[9646]: time="2024-01-22T11:21:11.049728847Z" level=info msg="Starting up"
Jan 22 11:22:11 functional-969800 dockerd[9646]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jan 22 11:22:11 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 11:22:11 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
Jan 22 11:22:11 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
Jan 22 11:22:11 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 23.
Jan 22 11:22:11 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
Jan 22 11:22:11 functional-969800 systemd[1]: Starting Docker Application Container Engine...
Jan 22 11:22:11 functional-969800 dockerd[9809]: time="2024-01-22T11:22:11.276508964Z" level=info msg="Starting up"
Jan 22 11:23:11 functional-969800 dockerd[9809]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jan 22 11:23:11 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 11:23:11 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
Jan 22 11:23:11 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
Jan 22 11:23:11 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 24.
Jan 22 11:23:11 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
Jan 22 11:23:11 functional-969800 systemd[1]: Starting Docker Application Container Engine...
Jan 22 11:23:11 functional-969800 dockerd[9961]: time="2024-01-22T11:23:11.489383259Z" level=info msg="Starting up"
Jan 22 11:24:11 functional-969800 dockerd[9961]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jan 22 11:24:11 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 11:24:11 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
Jan 22 11:24:11 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
Jan 22 11:24:11 functional-969800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 25.
Jan 22 11:24:11 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
Jan 22 11:24:11 functional-969800 systemd[1]: Starting Docker Application Container Engine...
Jan 22 11:24:11 functional-969800 dockerd[10291]: time="2024-01-22T11:24:11.800722506Z" level=info msg="Starting up"
Jan 22 11:24:46 functional-969800 dockerd[10291]: time="2024-01-22T11:24:46.714262322Z" level=info msg="Processing signal 'terminated'"
Jan 22 11:25:11 functional-969800 dockerd[10291]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jan 22 11:25:11 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 11:25:11 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
Jan 22 11:25:11 functional-969800 systemd[1]: Stopped Docker Application Container Engine.
Jan 22 11:25:11 functional-969800 systemd[1]: Starting Docker Application Container Engine...
Jan 22 11:25:11 functional-969800 dockerd[10628]: time="2024-01-22T11:25:11.884976767Z" level=info msg="Starting up"
Jan 22 11:26:11 functional-969800 dockerd[10628]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jan 22 11:26:11 functional-969800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 11:26:11 functional-969800 systemd[1]: docker.service: Failed with result 'exit-code'.
Jan 22 11:26:11 functional-969800 systemd[1]: Failed to start Docker Application Container Engine.
-- /stdout --
W0122 11:26:11.979558    6184 out.go:239] * 
W0122 11:26:11.981138    6184 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0122 11:26:11.982705    6184 out.go:177] 

--- FAIL: TestFunctional/serial/LogsFileCmd (93.18s)

TestFunctional/parallel (0s)

=== RUN   TestFunctional/parallel
functional_test.go:168: Unable to run more tests (deadline exceeded)
--- FAIL: TestFunctional/parallel (0.00s)

TestMultiNode/serial/PingHostFrom2Pods (57.25s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-484200 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-484200 -- exec busybox-5b5d89c9d6-flgvf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-484200 -- exec busybox-5b5d89c9d6-flgvf -- sh -c "ping -c 1 172.23.176.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-484200 -- exec busybox-5b5d89c9d6-flgvf -- sh -c "ping -c 1 172.23.176.1": exit status 1 (10.5226953s)

-- stdout --
	PING 172.23.176.1 (172.23.176.1): 56 data bytes
	
	--- 172.23.176.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0122 12:14:46.288735    7348 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
multinode_test.go:600: Failed to ping host (172.23.176.1) from pod (busybox-5b5d89c9d6-flgvf): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-484200 -- exec busybox-5b5d89c9d6-r46hz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-484200 -- exec busybox-5b5d89c9d6-r46hz -- sh -c "ping -c 1 172.23.176.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-484200 -- exec busybox-5b5d89c9d6-r46hz -- sh -c "ping -c 1 172.23.176.1": exit status 1 (10.5063581s)

-- stdout --
	PING 172.23.176.1 (172.23.176.1): 56 data bytes
	
	--- 172.23.176.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0122 12:14:57.338424    3036 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
multinode_test.go:600: Failed to ping host (172.23.176.1) from pod (busybox-5b5d89c9d6-r46hz): exit status 1
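For context on the failing check above: the test first resolves host.minikube.internal inside each pod with the `nslookup … | awk 'NR==5' | cut -d' ' -f3` pipeline, and only the subsequent ping to the extracted host IP fails. A minimal local sketch of that extraction step, run against canned BusyBox-style nslookup output (the sample text below is illustrative, not taken from this run):

```shell
# Simulate old-style BusyBox nslookup output for host.minikube.internal,
# then extract the resolved address exactly as the test does:
# take line 5 and print its third space-separated field.
printf 'Server:    10.96.0.10\nAddress 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n\nName:      host.minikube.internal\nAddress 1: 172.23.176.1\n' \
  | awk 'NR==5' | cut -d' ' -f3
```

With this output format, line 5 is `Address 1: <ip>`, so the third space-separated field is the resolved host address that the test then tries to ping.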
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-484200 -n multinode-484200
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-484200 -n multinode-484200: (12.1151492s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-484200 logs -n 25: (8.6480373s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-524900 ssh -- ls                    | mount-start-2-524900 | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:04 UTC | 22 Jan 24 12:04 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-524900                           | mount-start-1-524900 | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:04 UTC | 22 Jan 24 12:04 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-524900 ssh -- ls                    | mount-start-2-524900 | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:04 UTC | 22 Jan 24 12:04 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-524900                           | mount-start-2-524900 | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:04 UTC | 22 Jan 24 12:05 UTC |
	| start   | -p mount-start-2-524900                           | mount-start-2-524900 | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:05 UTC | 22 Jan 24 12:07 UTC |
	| mount   | C:\Users\jenkins.minikube7:/minikube-host         | mount-start-2-524900 | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:07 UTC |                     |
	|         | --profile mount-start-2-524900 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-524900 ssh -- ls                    | mount-start-2-524900 | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:07 UTC | 22 Jan 24 12:07 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-524900                           | mount-start-2-524900 | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:07 UTC | 22 Jan 24 12:07 UTC |
	| delete  | -p mount-start-1-524900                           | mount-start-1-524900 | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:07 UTC | 22 Jan 24 12:07 UTC |
	| start   | -p multinode-484200                               | multinode-484200     | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:07 UTC | 22 Jan 24 12:14 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-484200 -- apply -f                   | multinode-484200     | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:14 UTC | 22 Jan 24 12:14 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-484200 -- rollout                    | multinode-484200     | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:14 UTC | 22 Jan 24 12:14 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-484200 -- get pods -o                | multinode-484200     | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:14 UTC | 22 Jan 24 12:14 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-484200 -- get pods -o                | multinode-484200     | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:14 UTC | 22 Jan 24 12:14 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-484200 -- exec                       | multinode-484200     | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:14 UTC | 22 Jan 24 12:14 UTC |
	|         | busybox-5b5d89c9d6-flgvf --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-484200 -- exec                       | multinode-484200     | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:14 UTC | 22 Jan 24 12:14 UTC |
	|         | busybox-5b5d89c9d6-r46hz --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-484200 -- exec                       | multinode-484200     | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:14 UTC | 22 Jan 24 12:14 UTC |
	|         | busybox-5b5d89c9d6-flgvf --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-484200 -- exec                       | multinode-484200     | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:14 UTC | 22 Jan 24 12:14 UTC |
	|         | busybox-5b5d89c9d6-r46hz --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-484200 -- exec                       | multinode-484200     | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:14 UTC | 22 Jan 24 12:14 UTC |
	|         | busybox-5b5d89c9d6-flgvf -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-484200 -- exec                       | multinode-484200     | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:14 UTC | 22 Jan 24 12:14 UTC |
	|         | busybox-5b5d89c9d6-r46hz -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-484200 -- get pods -o                | multinode-484200     | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:14 UTC | 22 Jan 24 12:14 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-484200 -- exec                       | multinode-484200     | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:14 UTC | 22 Jan 24 12:14 UTC |
	|         | busybox-5b5d89c9d6-flgvf                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-484200 -- exec                       | multinode-484200     | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:14 UTC |                     |
	|         | busybox-5b5d89c9d6-flgvf -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.23.176.1                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-484200 -- exec                       | multinode-484200     | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:14 UTC | 22 Jan 24 12:14 UTC |
	|         | busybox-5b5d89c9d6-r46hz                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-484200 -- exec                       | multinode-484200     | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:14 UTC |                     |
	|         | busybox-5b5d89c9d6-r46hz -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.23.176.1                         |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/22 12:07:49
	Running on machine: minikube7
	Binary: Built with gc go1.21.6 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0122 12:07:49.001537   11512 out.go:296] Setting OutFile to fd 868 ...
	I0122 12:07:49.002577   11512 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0122 12:07:49.002577   11512 out.go:309] Setting ErrFile to fd 816...
	I0122 12:07:49.002577   11512 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0122 12:07:49.022462   11512 out.go:303] Setting JSON to false
	I0122 12:07:49.025199   11512 start.go:128] hostinfo: {"hostname":"minikube7","uptime":44637,"bootTime":1705880631,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3930 Build 19045.3930","kernelVersion":"10.0.19045.3930 Build 19045.3930","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0122 12:07:49.025825   11512 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0122 12:07:49.027313   11512 out.go:177] * [multinode-484200] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	I0122 12:07:49.027974   11512 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 12:07:49.028873   11512 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0122 12:07:49.027974   11512 notify.go:220] Checking for updates...
	I0122 12:07:49.029477   11512 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0122 12:07:49.031020   11512 out.go:177]   - MINIKUBE_LOCATION=18014
	I0122 12:07:49.031568   11512 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 12:07:49.033222   11512 driver.go:392] Setting default libvirt URI to qemu:///system
	I0122 12:07:54.388370   11512 out.go:177] * Using the hyperv driver based on user configuration
	I0122 12:07:54.389393   11512 start.go:298] selected driver: hyperv
	I0122 12:07:54.389393   11512 start.go:902] validating driver "hyperv" against <nil>
	I0122 12:07:54.389576   11512 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0122 12:07:54.439548   11512 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0122 12:07:54.440536   11512 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0122 12:07:54.440536   11512 cni.go:84] Creating CNI manager for ""
	I0122 12:07:54.440536   11512 cni.go:136] 0 nodes found, recommending kindnet
	I0122 12:07:54.440536   11512 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0122 12:07:54.440536   11512 start_flags.go:321] config:
	{Name:multinode-484200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-484200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0122 12:07:54.440536   11512 iso.go:125] acquiring lock: {Name:mkdec6f26395f187031104148d2e41f41e1ae42b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 12:07:54.442814   11512 out.go:177] * Starting control plane node multinode-484200 in cluster multinode-484200
	I0122 12:07:54.442814   11512 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0122 12:07:54.442814   11512 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0122 12:07:54.442814   11512 cache.go:56] Caching tarball of preloaded images
	I0122 12:07:54.443816   11512 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0122 12:07:54.443816   11512 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0122 12:07:54.443816   11512 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\config.json ...
	I0122 12:07:54.444817   11512 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\config.json: {Name:mkf8e00172dd279e44f1dd75fcf5ac0bed37497a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 12:07:54.445100   11512 start.go:365] acquiring machines lock for multinode-484200: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0122 12:07:54.446105   11512 start.go:369] acquired machines lock for "multinode-484200" in 0s
	I0122 12:07:54.446105   11512 start.go:93] Provisioning new machine with config: &{Name:multinode-484200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-484200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0122 12:07:54.446105   11512 start.go:125] createHost starting for "" (driver="hyperv")
	I0122 12:07:54.447158   11512 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0122 12:07:54.447445   11512 start.go:159] libmachine.API.Create for "multinode-484200" (driver="hyperv")
	I0122 12:07:54.447529   11512 client.go:168] LocalClient.Create starting
	I0122 12:07:54.448122   11512 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0122 12:07:54.448436   11512 main.go:141] libmachine: Decoding PEM data...
	I0122 12:07:54.448485   11512 main.go:141] libmachine: Parsing certificate...
	I0122 12:07:54.448735   11512 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0122 12:07:54.448958   11512 main.go:141] libmachine: Decoding PEM data...
	I0122 12:07:54.449007   11512 main.go:141] libmachine: Parsing certificate...
	I0122 12:07:54.449088   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0122 12:07:56.539439   11512 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0122 12:07:56.539531   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:07:56.539531   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0122 12:07:58.263447   11512 main.go:141] libmachine: [stdout =====>] : False
	
	I0122 12:07:58.263447   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:07:58.263517   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0122 12:07:59.778006   11512 main.go:141] libmachine: [stdout =====>] : True
	
	I0122 12:07:59.778177   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:07:59.778272   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0122 12:08:03.402255   11512 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0122 12:08:03.402453   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:08:03.404830   11512 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0122 12:08:03.881985   11512 main.go:141] libmachine: Creating SSH key...
	I0122 12:08:04.148587   11512 main.go:141] libmachine: Creating VM...
	I0122 12:08:04.148587   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0122 12:08:06.984187   11512 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0122 12:08:06.984587   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:08:06.984724   11512 main.go:141] libmachine: Using switch "Default Switch"
	I0122 12:08:06.984724   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0122 12:08:08.738136   11512 main.go:141] libmachine: [stdout =====>] : True
	
	I0122 12:08:08.738193   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:08:08.738193   11512 main.go:141] libmachine: Creating VHD
	I0122 12:08:08.738193   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200\fixed.vhd' -SizeBytes 10MB -Fixed
	I0122 12:08:12.473950   11512 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 467F2858-ABAE-4DDA-A521-F5374A5C1980
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0122 12:08:12.474092   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:08:12.474092   11512 main.go:141] libmachine: Writing magic tar header
	I0122 12:08:12.474092   11512 main.go:141] libmachine: Writing SSH key tar header
	I0122 12:08:12.483738   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200\disk.vhd' -VHDType Dynamic -DeleteSource
	I0122 12:08:15.646732   11512 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:08:15.646989   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:08:15.646989   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200\disk.vhd' -SizeBytes 20000MB
	I0122 12:08:18.187596   11512 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:08:18.187701   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:08:18.187792   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-484200 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0122 12:08:21.695338   11512 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-484200 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0122 12:08:21.695442   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:08:21.695442   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-484200 -DynamicMemoryEnabled $false
	I0122 12:08:23.913458   11512 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:08:23.913744   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:08:23.914081   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-484200 -Count 2
	I0122 12:08:26.083104   11512 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:08:26.083104   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:08:26.083236   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-484200 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200\boot2docker.iso'
	I0122 12:08:28.632005   11512 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:08:28.632005   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:08:28.632005   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-484200 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200\disk.vhd'
	I0122 12:08:31.250394   11512 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:08:31.250641   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:08:31.250641   11512 main.go:141] libmachine: Starting VM...
	I0122 12:08:31.250641   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-484200
	I0122 12:08:34.145676   11512 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:08:34.145850   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:08:34.145883   11512 main.go:141] libmachine: Waiting for host to start...
	I0122 12:08:34.145883   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:08:36.433056   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:08:36.433056   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:08:36.433177   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:08:38.909713   11512 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:08:38.909764   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:08:39.913056   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:08:42.174962   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:08:42.174962   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:08:42.174962   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:08:44.771767   11512 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:08:44.772099   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:08:45.774582   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:08:47.989265   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:08:47.989323   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:08:47.989379   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:08:50.500951   11512 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:08:50.500951   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:08:51.516015   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:08:53.740462   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:08:53.740462   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:08:53.740462   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:08:56.359972   11512 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:08:56.359972   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:08:57.366350   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:08:59.580418   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:08:59.580418   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:08:59.580418   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:09:02.126775   11512 main.go:141] libmachine: [stdout =====>] : 172.23.177.163
	
	I0122 12:09:02.126775   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:09:02.126854   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:09:04.266516   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:09:04.266812   11512 main.go:141] libmachine: [stderr =====>] : 
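The repeated `Get-VM ... .state` / `ipaddresses[0]` queries above are a poll-until-ready loop: the network adapter reports no address until the guest has booted and obtained a lease (about 28 seconds in this run). A minimal sketch of that pattern, with a hypothetical `poll_ip` standing in for the PowerShell query:

```shell
# poll_ip is a hypothetical stand-in for
#   (( Hyper-V\Get-VM <name> ).networkadapters[0]).ipaddresses[0]
# It prints nothing until the third call, mimicking a slow DHCP lease.
poll_ip() { [ "$1" -ge 3 ] && echo "172.23.177.163"; }

tries=0
ip=""
while [ -z "$ip" ] && [ "$tries" -lt 30 ]; do
  tries=$((tries + 1))
  ip=$(poll_ip "$tries")
  # the real driver sleeps ~1s between polls; omitted here
done
echo "got $ip after $tries polls"
```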
	I0122 12:09:04.266875   11512 machine.go:88] provisioning docker machine ...
	I0122 12:09:04.266875   11512 buildroot.go:166] provisioning hostname "multinode-484200"
	I0122 12:09:04.266875   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:09:06.385832   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:09:06.385832   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:09:06.385832   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:09:08.913747   11512 main.go:141] libmachine: [stdout =====>] : 172.23.177.163
	
	I0122 12:09:08.913806   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:09:08.919738   11512 main.go:141] libmachine: Using SSH client type: native
	I0122 12:09:08.931962   11512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.177.163 22 <nil> <nil>}
	I0122 12:09:08.931962   11512 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-484200 && echo "multinode-484200" | sudo tee /etc/hostname
	I0122 12:09:09.081820   11512 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-484200
	
	I0122 12:09:09.081933   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:09:11.251144   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:09:11.251144   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:09:11.251391   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:09:13.815237   11512 main.go:141] libmachine: [stdout =====>] : 172.23.177.163
	
	I0122 12:09:13.815237   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:09:13.820892   11512 main.go:141] libmachine: Using SSH client type: native
	I0122 12:09:13.821548   11512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.177.163 22 <nil> <nil>}
	I0122 12:09:13.821548   11512 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-484200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-484200/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-484200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0122 12:09:13.959927   11512 main.go:141] libmachine: SSH cmd err, output: <nil>: 
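The SSH command above edits /etc/hosts defensively: do nothing if the hostname is already present, rewrite the `127.0.1.1` line if one exists, otherwise append a new entry. The same logic replayed against a temp file (GNU grep/sed assumed; POSIX character classes used in place of `\s`):

```shell
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$hosts"

name=multinode-484200
if ! grep -q "[[:space:]]$name\$" "$hosts"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$hosts"; then
    # rewrite the existing 127.0.1.1 entry in place
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $name/" "$hosts"
  else
    echo "127.0.1.1 $name" >> "$hosts"
  fi
fi
result=$(cat "$hosts"); rm -f "$hosts"
echo "$result"
```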
	I0122 12:09:13.959927   11512 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0122 12:09:13.959927   11512 buildroot.go:174] setting up certificates
	I0122 12:09:13.959927   11512 provision.go:83] configureAuth start
	I0122 12:09:13.959927   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:09:16.109994   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:09:16.109994   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:09:16.109994   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:09:18.653127   11512 main.go:141] libmachine: [stdout =====>] : 172.23.177.163
	
	I0122 12:09:18.653324   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:09:18.653458   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:09:20.780607   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:09:20.780607   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:09:20.780748   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:09:23.328882   11512 main.go:141] libmachine: [stdout =====>] : 172.23.177.163
	
	I0122 12:09:23.328975   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:09:23.329168   11512 provision.go:138] copyHostCerts
	I0122 12:09:23.329335   11512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0122 12:09:23.329694   11512 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0122 12:09:23.329795   11512 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0122 12:09:23.330362   11512 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0122 12:09:23.331396   11512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0122 12:09:23.331396   11512 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0122 12:09:23.331396   11512 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0122 12:09:23.332320   11512 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0122 12:09:23.333335   11512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0122 12:09:23.333335   11512 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0122 12:09:23.333335   11512 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0122 12:09:23.334102   11512 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0122 12:09:23.335107   11512 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-484200 san=[172.23.177.163 172.23.177.163 localhost 127.0.0.1 minikube multinode-484200]
	I0122 12:09:23.657110   11512 provision.go:172] copyRemoteCerts
	I0122 12:09:23.670913   11512 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0122 12:09:23.670981   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:09:25.815094   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:09:25.815094   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:09:25.815224   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:09:28.363732   11512 main.go:141] libmachine: [stdout =====>] : 172.23.177.163
	
	I0122 12:09:28.363882   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:09:28.363882   11512 sshutil.go:53] new ssh client: &{IP:172.23.177.163 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200\id_rsa Username:docker}
	I0122 12:09:28.471519   11512 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8005169s)
	I0122 12:09:28.471612   11512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0122 12:09:28.472258   11512 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0122 12:09:28.511096   11512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0122 12:09:28.511610   11512 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I0122 12:09:28.552406   11512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0122 12:09:28.552892   11512 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0122 12:09:28.588661   11512 provision.go:86] duration metric: configureAuth took 14.6286702s
	I0122 12:09:28.588661   11512 buildroot.go:189] setting minikube options for container-runtime
	I0122 12:09:28.589336   11512 config.go:182] Loaded profile config "multinode-484200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 12:09:28.589336   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:09:30.715667   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:09:30.715667   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:09:30.715667   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:09:33.273258   11512 main.go:141] libmachine: [stdout =====>] : 172.23.177.163
	
	I0122 12:09:33.273331   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:09:33.280220   11512 main.go:141] libmachine: Using SSH client type: native
	I0122 12:09:33.280721   11512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.177.163 22 <nil> <nil>}
	I0122 12:09:33.280721   11512 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0122 12:09:33.420927   11512 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0122 12:09:33.421037   11512 buildroot.go:70] root file system type: tmpfs
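The `df --output=fstype / | tail -n 1` probe above is how the provisioner learns that the guest root is tmpfs (the Buildroot ISO runs from RAM), which is why the docker unit file has to be written out rather than assumed to persist across boots. The same probe works on any Linux host with GNU coreutils:

```shell
# Ask df for just the filesystem type of /; tail strips the header row.
fstype=$(df --output=fstype / | tail -n 1)
echo "root fs: $fstype"
```

On the minikube guest this prints `tmpfs`; on an ordinary host it will be ext4, btrfs, or similar.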
	I0122 12:09:33.421272   11512 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0122 12:09:33.421377   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:09:35.530125   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:09:35.530279   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:09:35.530279   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:09:38.138402   11512 main.go:141] libmachine: [stdout =====>] : 172.23.177.163
	
	I0122 12:09:38.138488   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:09:38.145618   11512 main.go:141] libmachine: Using SSH client type: native
	I0122 12:09:38.145618   11512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.177.163 22 <nil> <nil>}
	I0122 12:09:38.145618   11512 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0122 12:09:38.309115   11512 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0122 12:09:38.309198   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:09:40.427875   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:09:40.427875   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:09:40.428089   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:09:43.048501   11512 main.go:141] libmachine: [stdout =====>] : 172.23.177.163
	
	I0122 12:09:43.048501   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:09:43.054836   11512 main.go:141] libmachine: Using SSH client type: native
	I0122 12:09:43.055691   11512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.177.163 22 <nil> <nil>}
	I0122 12:09:43.055691   11512 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0122 12:09:43.992312   11512 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
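The `diff -u old new || { mv ...; systemctl ... }` one-liner above is an idempotent-update guard: restart docker only when the unit file actually changed, or, as on this first boot, does not exist yet, which is why diff prints "can't stat". The same pattern in miniature, with the systemctl calls replaced by a flag:

```shell
dir=$(mktemp -d)
printf 'ExecStart=/usr/bin/dockerd\n' > "$dir/docker.service.new"

restarted=no
# diff exits non-zero when the files differ or the target is missing,
# which makes the || branch fire exactly when an update is needed
diff -u "$dir/docker.service" "$dir/docker.service.new" 2>/dev/null || {
  mv "$dir/docker.service.new" "$dir/docker.service"
  restarted=yes   # stands in for daemon-reload + enable + restart docker
}
echo "restarted=$restarted"
```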
	I0122 12:09:43.992392   11512 machine.go:91] provisioned docker machine in 39.7253419s
	I0122 12:09:43.992392   11512 client.go:171] LocalClient.Create took 1m49.5443812s
	I0122 12:09:43.992518   11512 start.go:167] duration metric: libmachine.API.Create for "multinode-484200" took 1m49.5445318s
	I0122 12:09:43.992614   11512 start.go:300] post-start starting for "multinode-484200" (driver="hyperv")
	I0122 12:09:43.992676   11512 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0122 12:09:44.005667   11512 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0122 12:09:44.006187   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:09:46.138830   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:09:46.138830   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:09:46.138830   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:09:48.702311   11512 main.go:141] libmachine: [stdout =====>] : 172.23.177.163
	
	I0122 12:09:48.702390   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:09:48.704007   11512 sshutil.go:53] new ssh client: &{IP:172.23.177.163 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200\id_rsa Username:docker}
	I0122 12:09:48.810172   11512 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8037075s)
	I0122 12:09:48.825771   11512 ssh_runner.go:195] Run: cat /etc/os-release
	I0122 12:09:48.832139   11512 command_runner.go:130] > NAME=Buildroot
	I0122 12:09:48.832139   11512 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0122 12:09:48.832224   11512 command_runner.go:130] > ID=buildroot
	I0122 12:09:48.832224   11512 command_runner.go:130] > VERSION_ID=2021.02.12
	I0122 12:09:48.832224   11512 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0122 12:09:48.832224   11512 info.go:137] Remote host: Buildroot 2021.02.12
	I0122 12:09:48.832224   11512 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0122 12:09:48.832758   11512 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0122 12:09:48.833676   11512 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> 63962.pem in /etc/ssl/certs
	I0122 12:09:48.833676   11512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> /etc/ssl/certs/63962.pem
	I0122 12:09:48.848420   11512 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0122 12:09:48.863244   11512 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem --> /etc/ssl/certs/63962.pem (1708 bytes)
	I0122 12:09:48.900492   11512 start.go:303] post-start completed in 4.9078563s
	I0122 12:09:48.902685   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:09:51.086134   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:09:51.086419   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:09:51.086419   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:09:53.645517   11512 main.go:141] libmachine: [stdout =====>] : 172.23.177.163
	
	I0122 12:09:53.645707   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:09:53.645773   11512 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\config.json ...
	I0122 12:09:53.648963   11512 start.go:128] duration metric: createHost completed in 1m59.2023333s
	I0122 12:09:53.648963   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:09:55.797047   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:09:55.797512   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:09:55.797512   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:09:58.328767   11512 main.go:141] libmachine: [stdout =====>] : 172.23.177.163
	
	I0122 12:09:58.328767   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:09:58.335079   11512 main.go:141] libmachine: Using SSH client type: native
	I0122 12:09:58.335782   11512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.177.163 22 <nil> <nil>}
	I0122 12:09:58.335782   11512 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0122 12:09:58.458837   11512 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705925398.460044274
	
	I0122 12:09:58.458837   11512 fix.go:206] guest clock: 1705925398.460044274
	I0122 12:09:58.458837   11512 fix.go:219] Guest: 2024-01-22 12:09:58.460044274 +0000 UTC Remote: 2024-01-22 12:09:53.6489634 +0000 UTC m=+124.828553601 (delta=4.811080874s)
	I0122 12:09:58.459447   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:10:00.619921   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:10:00.619921   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:10:00.620029   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:10:03.197598   11512 main.go:141] libmachine: [stdout =====>] : 172.23.177.163
	
	I0122 12:10:03.197598   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:10:03.203422   11512 main.go:141] libmachine: Using SSH client type: native
	I0122 12:10:03.204463   11512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.177.163 22 <nil> <nil>}
	I0122 12:10:03.204463   11512 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1705925398
	I0122 12:10:03.342381   11512 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan 22 12:09:58 UTC 2024
	
	I0122 12:10:03.342460   11512 fix.go:226] clock set: Mon Jan 22 12:09:58 UTC 2024
	 (err=<nil>)
	I0122 12:10:03.342460   11512 start.go:83] releasing machines lock for "multinode-484200", held for 2m8.895788s
	I0122 12:10:03.342460   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:10:05.446083   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:10:05.446140   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:10:05.446140   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:10:08.011313   11512 main.go:141] libmachine: [stdout =====>] : 172.23.177.163
	
	I0122 12:10:08.011576   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:10:08.016200   11512 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0122 12:10:08.016362   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:10:08.030933   11512 ssh_runner.go:195] Run: cat /version.json
	I0122 12:10:08.030933   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:10:10.243687   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:10:10.243687   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:10:10.243687   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:10:10.243687   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:10:10.243816   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:10:10.243816   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:10:12.949407   11512 main.go:141] libmachine: [stdout =====>] : 172.23.177.163
	
	I0122 12:10:12.949407   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:10:12.950005   11512 sshutil.go:53] new ssh client: &{IP:172.23.177.163 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200\id_rsa Username:docker}
	I0122 12:10:12.969136   11512 main.go:141] libmachine: [stdout =====>] : 172.23.177.163
	
	I0122 12:10:12.969508   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:10:12.969983   11512 sshutil.go:53] new ssh client: &{IP:172.23.177.163 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200\id_rsa Username:docker}
	I0122 12:10:13.151557   11512 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0122 12:10:13.151631   11512 command_runner.go:130] > {"iso_version": "v1.32.1-1703784139-17866", "kicbase_version": "v0.0.42-1703723663-17866", "minikube_version": "v1.32.0", "commit": "eb69424d8f623d7cabea57d4395ce87adf1b5fc3"}
	I0122 12:10:13.151631   11512 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1353302s)
	I0122 12:10:13.151631   11512 ssh_runner.go:235] Completed: cat /version.json: (5.1206755s)
	I0122 12:10:13.164782   11512 ssh_runner.go:195] Run: systemctl --version
	I0122 12:10:13.172529   11512 command_runner.go:130] > systemd 247 (247)
	I0122 12:10:13.172702   11512 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0122 12:10:13.186254   11512 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0122 12:10:13.194205   11512 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0122 12:10:13.194205   11512 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0122 12:10:13.207307   11512 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0122 12:10:13.233076   11512 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0122 12:10:13.233076   11512 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0122 12:10:13.233160   11512 start.go:475] detecting cgroup driver to use...
	I0122 12:10:13.233487   11512 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 12:10:13.261966   11512 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0122 12:10:13.285074   11512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0122 12:10:13.319536   11512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0122 12:10:13.336338   11512 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0122 12:10:13.349158   11512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0122 12:10:13.377641   11512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 12:10:13.407029   11512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0122 12:10:13.435992   11512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 12:10:13.465144   11512 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0122 12:10:13.493591   11512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0122 12:10:13.521407   11512 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0122 12:10:13.535490   11512 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0122 12:10:13.548641   11512 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0122 12:10:13.576072   11512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 12:10:13.744039   11512 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0122 12:10:13.767855   11512 start.go:475] detecting cgroup driver to use...
	I0122 12:10:13.781462   11512 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0122 12:10:13.801188   11512 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0122 12:10:13.801188   11512 command_runner.go:130] > [Unit]
	I0122 12:10:13.801188   11512 command_runner.go:130] > Description=Docker Application Container Engine
	I0122 12:10:13.801423   11512 command_runner.go:130] > Documentation=https://docs.docker.com
	I0122 12:10:13.801423   11512 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0122 12:10:13.801423   11512 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0122 12:10:13.801423   11512 command_runner.go:130] > StartLimitBurst=3
	I0122 12:10:13.801423   11512 command_runner.go:130] > StartLimitIntervalSec=60
	I0122 12:10:13.801423   11512 command_runner.go:130] > [Service]
	I0122 12:10:13.801423   11512 command_runner.go:130] > Type=notify
	I0122 12:10:13.801423   11512 command_runner.go:130] > Restart=on-failure
	I0122 12:10:13.801423   11512 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0122 12:10:13.801588   11512 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0122 12:10:13.801702   11512 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0122 12:10:13.801702   11512 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0122 12:10:13.801702   11512 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0122 12:10:13.801702   11512 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0122 12:10:13.801702   11512 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0122 12:10:13.801806   11512 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0122 12:10:13.801806   11512 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0122 12:10:13.801806   11512 command_runner.go:130] > ExecStart=
	I0122 12:10:13.801861   11512 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0122 12:10:13.801926   11512 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0122 12:10:13.801926   11512 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0122 12:10:13.801926   11512 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0122 12:10:13.801926   11512 command_runner.go:130] > LimitNOFILE=infinity
	I0122 12:10:13.801926   11512 command_runner.go:130] > LimitNPROC=infinity
	I0122 12:10:13.801926   11512 command_runner.go:130] > LimitCORE=infinity
	I0122 12:10:13.801926   11512 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0122 12:10:13.802033   11512 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0122 12:10:13.802033   11512 command_runner.go:130] > TasksMax=infinity
	I0122 12:10:13.802033   11512 command_runner.go:130] > TimeoutStartSec=0
	I0122 12:10:13.802033   11512 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0122 12:10:13.802033   11512 command_runner.go:130] > Delegate=yes
	I0122 12:10:13.802033   11512 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0122 12:10:13.802033   11512 command_runner.go:130] > KillMode=process
	I0122 12:10:13.802033   11512 command_runner.go:130] > [Install]
	I0122 12:10:13.802033   11512 command_runner.go:130] > WantedBy=multi-user.target
	I0122 12:10:13.815719   11512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 12:10:13.853242   11512 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0122 12:10:13.894996   11512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 12:10:13.926231   11512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0122 12:10:13.959562   11512 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0122 12:10:14.008617   11512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0122 12:10:14.027502   11512 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 12:10:14.051418   11512 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0122 12:10:14.066225   11512 ssh_runner.go:195] Run: which cri-dockerd
	I0122 12:10:14.071770   11512 command_runner.go:130] > /usr/bin/cri-dockerd
	I0122 12:10:14.085545   11512 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0122 12:10:14.099336   11512 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0122 12:10:14.137513   11512 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0122 12:10:14.303083   11512 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0122 12:10:14.454216   11512 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0122 12:10:14.454216   11512 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0122 12:10:14.500107   11512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 12:10:14.663614   11512 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0122 12:10:16.163384   11512 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.4996679s)
	I0122 12:10:16.177213   11512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0122 12:10:16.209709   11512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0122 12:10:16.240438   11512 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0122 12:10:16.409443   11512 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0122 12:10:16.575576   11512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 12:10:16.751150   11512 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0122 12:10:16.785248   11512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0122 12:10:16.815930   11512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 12:10:16.977031   11512 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0122 12:10:17.077745   11512 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0122 12:10:17.091121   11512 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0122 12:10:17.098110   11512 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0122 12:10:17.098110   11512 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0122 12:10:17.098110   11512 command_runner.go:130] > Device: 16h/22d	Inode: 946         Links: 1
	I0122 12:10:17.098110   11512 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0122 12:10:17.098110   11512 command_runner.go:130] > Access: 2024-01-22 12:10:16.999019537 +0000
	I0122 12:10:17.098110   11512 command_runner.go:130] > Modify: 2024-01-22 12:10:16.999019537 +0000
	I0122 12:10:17.098110   11512 command_runner.go:130] > Change: 2024-01-22 12:10:17.003019537 +0000
	I0122 12:10:17.098110   11512 command_runner.go:130] >  Birth: -
	I0122 12:10:17.098110   11512 start.go:543] Will wait 60s for crictl version
	I0122 12:10:17.113103   11512 ssh_runner.go:195] Run: which crictl
	I0122 12:10:17.118059   11512 command_runner.go:130] > /usr/bin/crictl
	I0122 12:10:17.130721   11512 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0122 12:10:17.199298   11512 command_runner.go:130] > Version:  0.1.0
	I0122 12:10:17.199298   11512 command_runner.go:130] > RuntimeName:  docker
	I0122 12:10:17.199298   11512 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0122 12:10:17.199420   11512 command_runner.go:130] > RuntimeApiVersion:  v1
	I0122 12:10:17.199420   11512 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0122 12:10:17.209865   11512 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0122 12:10:17.243400   11512 command_runner.go:130] > 24.0.7
	I0122 12:10:17.253959   11512 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0122 12:10:17.284532   11512 command_runner.go:130] > 24.0.7
	I0122 12:10:17.288116   11512 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0122 12:10:17.288116   11512 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0122 12:10:17.292623   11512 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0122 12:10:17.292623   11512 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0122 12:10:17.292623   11512 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0122 12:10:17.292623   11512 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:49:f0:b7 Flags:up|broadcast|multicast|running}
	I0122 12:10:17.295246   11512 ip.go:210] interface addr: fe80::2ecf:c548:6b1f:89a8/64
	I0122 12:10:17.295246   11512 ip.go:210] interface addr: 172.23.176.1/20
	I0122 12:10:17.306772   11512 ssh_runner.go:195] Run: grep 172.23.176.1	host.minikube.internal$ /etc/hosts
	I0122 12:10:17.313022   11512 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.176.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 12:10:17.330667   11512 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0122 12:10:17.339663   11512 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0122 12:10:17.361595   11512 docker.go:685] Got preloaded images: 
	I0122 12:10:17.361595   11512 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0122 12:10:17.374580   11512 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0122 12:10:17.390592   11512 command_runner.go:139] > {"Repositories":{}}
	I0122 12:10:17.406860   11512 ssh_runner.go:195] Run: which lz4
	I0122 12:10:17.412277   11512 command_runner.go:130] > /usr/bin/lz4
	I0122 12:10:17.412429   11512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0122 12:10:17.423887   11512 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0122 12:10:17.429441   11512 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0122 12:10:17.430076   11512 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0122 12:10:17.430247   11512 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I0122 12:10:19.634181   11512 docker.go:649] Took 2.221513 seconds to copy over tarball
	I0122 12:10:19.647234   11512 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0122 12:10:28.301409   11512 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6540862s)
	I0122 12:10:28.301465   11512 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0122 12:10:28.371208   11512 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0122 12:10:28.387316   11512 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8
bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.4":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.4":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.4":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021
a3a2899304398e"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.4":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0122 12:10:28.387316   11512 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0122 12:10:28.429659   11512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 12:10:28.596336   11512 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0122 12:10:31.526273   11512 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.9298621s)
	I0122 12:10:31.536257   11512 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0122 12:10:31.562815   11512 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0122 12:10:31.563132   11512 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0122 12:10:31.563132   11512 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0122 12:10:31.563132   11512 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0122 12:10:31.563132   11512 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0122 12:10:31.563132   11512 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0122 12:10:31.563132   11512 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0122 12:10:31.563210   11512 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 12:10:31.563269   11512 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0122 12:10:31.563320   11512 cache_images.go:84] Images are preloaded, skipping loading
	I0122 12:10:31.573089   11512 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0122 12:10:31.604663   11512 command_runner.go:130] > cgroupfs
	I0122 12:10:31.605692   11512 cni.go:84] Creating CNI manager for ""
	I0122 12:10:31.605903   11512 cni.go:136] 1 nodes found, recommending kindnet
	I0122 12:10:31.605903   11512 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0122 12:10:31.605991   11512 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.23.177.163 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-484200 NodeName:multinode-484200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.23.177.163"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.23.177.163 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0122 12:10:31.606163   11512 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.23.177.163
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-484200"
	  kubeletExtraArgs:
	    node-ip: 172.23.177.163
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.23.177.163"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0122 12:10:31.606163   11512 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-484200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.177.163
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-484200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0122 12:10:31.618555   11512 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0122 12:10:31.635660   11512 command_runner.go:130] > kubeadm
	I0122 12:10:31.635660   11512 command_runner.go:130] > kubectl
	I0122 12:10:31.635729   11512 command_runner.go:130] > kubelet
	I0122 12:10:31.635729   11512 binaries.go:44] Found k8s binaries, skipping transfer
	I0122 12:10:31.649808   11512 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0122 12:10:31.663388   11512 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0122 12:10:31.691353   11512 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0122 12:10:31.714173   11512 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0122 12:10:31.755725   11512 ssh_runner.go:195] Run: grep 172.23.177.163	control-plane.minikube.internal$ /etc/hosts
	I0122 12:10:31.760758   11512 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.177.163	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 12:10:31.778757   11512 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200 for IP: 172.23.177.163
	I0122 12:10:31.779760   11512 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 12:10:31.779760   11512 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0122 12:10:31.779760   11512 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0122 12:10:31.780758   11512 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\client.key
	I0122 12:10:31.780758   11512 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\client.crt with IP's: []
	I0122 12:10:31.920237   11512 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\client.crt ...
	I0122 12:10:31.920237   11512 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\client.crt: {Name:mka67f10381827cfa0ef354f98e450e9c60bb679 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 12:10:31.922024   11512 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\client.key ...
	I0122 12:10:31.922024   11512 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\client.key: {Name:mke8624c8d53ad86125cd082f277f4e2cfb7ffda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 12:10:31.922297   11512 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.key.4d01dd63
	I0122 12:10:31.923331   11512 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.crt.4d01dd63 with IP's: [172.23.177.163 10.96.0.1 127.0.0.1 10.0.0.1]
	I0122 12:10:32.102697   11512 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.crt.4d01dd63 ...
	I0122 12:10:32.102697   11512 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.crt.4d01dd63: {Name:mk69de577d931100d84729534c3d31898e334390 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 12:10:32.104973   11512 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.key.4d01dd63 ...
	I0122 12:10:32.104973   11512 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.key.4d01dd63: {Name:mk03ef80871b45c0b7c59d267869041ef170f388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 12:10:32.105344   11512 certs.go:337] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.crt.4d01dd63 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.crt
	I0122 12:10:32.116474   11512 certs.go:341] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.key.4d01dd63 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.key
	I0122 12:10:32.117545   11512 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\proxy-client.key
	I0122 12:10:32.118618   11512 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\proxy-client.crt with IP's: []
	I0122 12:10:32.264871   11512 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\proxy-client.crt ...
	I0122 12:10:32.264871   11512 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\proxy-client.crt: {Name:mkef5ae580964dabf8ab058c009aba248bdb28ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 12:10:32.265953   11512 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\proxy-client.key ...
	I0122 12:10:32.265953   11512 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\proxy-client.key: {Name:mk1c3bd8cdf19f5356aec75377a08e316e72223f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 12:10:32.267247   11512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0122 12:10:32.268248   11512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0122 12:10:32.268447   11512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0122 12:10:32.277313   11512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0122 12:10:32.277985   11512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0122 12:10:32.278129   11512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0122 12:10:32.278350   11512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0122 12:10:32.278545   11512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0122 12:10:32.279255   11512 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\6396.pem (1338 bytes)
	W0122 12:10:32.279506   11512 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\6396_empty.pem, impossibly tiny 0 bytes
	I0122 12:10:32.279702   11512 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0122 12:10:32.279840   11512 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0122 12:10:32.280250   11512 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0122 12:10:32.280404   11512 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0122 12:10:32.280661   11512 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem (1708 bytes)
	I0122 12:10:32.281332   11512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> /usr/share/ca-certificates/63962.pem
	I0122 12:10:32.281457   11512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:10:32.281632   11512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\6396.pem -> /usr/share/ca-certificates/6396.pem
	I0122 12:10:32.282338   11512 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0122 12:10:32.325663   11512 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0122 12:10:32.366326   11512 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0122 12:10:32.406338   11512 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0122 12:10:32.445138   11512 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0122 12:10:32.486038   11512 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0122 12:10:32.527090   11512 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0122 12:10:32.566360   11512 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0122 12:10:32.605996   11512 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem --> /usr/share/ca-certificates/63962.pem (1708 bytes)
	I0122 12:10:32.644603   11512 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0122 12:10:32.688583   11512 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\6396.pem --> /usr/share/ca-certificates/6396.pem (1338 bytes)
	I0122 12:10:32.731191   11512 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0122 12:10:32.783977   11512 ssh_runner.go:195] Run: openssl version
	I0122 12:10:32.790657   11512 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0122 12:10:32.806925   11512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/63962.pem && ln -fs /usr/share/ca-certificates/63962.pem /etc/ssl/certs/63962.pem"
	I0122 12:10:32.840753   11512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/63962.pem
	I0122 12:10:32.847699   11512 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 22 10:53 /usr/share/ca-certificates/63962.pem
	I0122 12:10:32.847699   11512 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 22 10:53 /usr/share/ca-certificates/63962.pem
	I0122 12:10:32.861706   11512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/63962.pem
	I0122 12:10:32.869234   11512 command_runner.go:130] > 3ec20f2e
	I0122 12:10:32.884863   11512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/63962.pem /etc/ssl/certs/3ec20f2e.0"
	I0122 12:10:32.915892   11512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0122 12:10:32.944368   11512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:10:32.953680   11512 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 22 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:10:32.953745   11512 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 22 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:10:32.969714   11512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:10:32.977765   11512 command_runner.go:130] > b5213941
	I0122 12:10:32.991420   11512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0122 12:10:33.022133   11512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6396.pem && ln -fs /usr/share/ca-certificates/6396.pem /etc/ssl/certs/6396.pem"
	I0122 12:10:33.051682   11512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6396.pem
	I0122 12:10:33.057690   11512 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 22 10:53 /usr/share/ca-certificates/6396.pem
	I0122 12:10:33.058225   11512 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 22 10:53 /usr/share/ca-certificates/6396.pem
	I0122 12:10:33.071538   11512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6396.pem
	I0122 12:10:33.078463   11512 command_runner.go:130] > 51391683
	I0122 12:10:33.093670   11512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6396.pem /etc/ssl/certs/51391683.0"
	I0122 12:10:33.126407   11512 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0122 12:10:33.132736   11512 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0122 12:10:33.132958   11512 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0122 12:10:33.133292   11512 kubeadm.go:404] StartCluster: {Name:multinode-484200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-484200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.23.177.163 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0122 12:10:33.143665   11512 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0122 12:10:33.182551   11512 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0122 12:10:33.195626   11512 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0122 12:10:33.195626   11512 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0122 12:10:33.195626   11512 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0122 12:10:33.210178   11512 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0122 12:10:33.237795   11512 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0122 12:10:33.251458   11512 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0122 12:10:33.251458   11512 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0122 12:10:33.252398   11512 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0122 12:10:33.252398   11512 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 12:10:33.252641   11512 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 12:10:33.252701   11512 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0122 12:10:33.955535   11512 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0122 12:10:33.955662   11512 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0122 12:10:46.729868   11512 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0122 12:10:46.729868   11512 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I0122 12:10:46.729954   11512 kubeadm.go:322] [preflight] Running pre-flight checks
	I0122 12:10:46.729954   11512 command_runner.go:130] > [preflight] Running pre-flight checks
	I0122 12:10:46.730068   11512 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0122 12:10:46.730068   11512 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0122 12:10:46.730225   11512 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0122 12:10:46.730248   11512 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0122 12:10:46.730248   11512 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0122 12:10:46.730248   11512 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0122 12:10:46.730248   11512 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0122 12:10:46.731949   11512 out.go:204]   - Generating certificates and keys ...
	I0122 12:10:46.730248   11512 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0122 12:10:46.732255   11512 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0122 12:10:46.732255   11512 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0122 12:10:46.732309   11512 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0122 12:10:46.732309   11512 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0122 12:10:46.732309   11512 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0122 12:10:46.732309   11512 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0122 12:10:46.732309   11512 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0122 12:10:46.732309   11512 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0122 12:10:46.732949   11512 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0122 12:10:46.733002   11512 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0122 12:10:46.733160   11512 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0122 12:10:46.733232   11512 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0122 12:10:46.733383   11512 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0122 12:10:46.733383   11512 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0122 12:10:46.733638   11512 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-484200] and IPs [172.23.177.163 127.0.0.1 ::1]
	I0122 12:10:46.733638   11512 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-484200] and IPs [172.23.177.163 127.0.0.1 ::1]
	I0122 12:10:46.733638   11512 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0122 12:10:46.733638   11512 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0122 12:10:46.734309   11512 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-484200] and IPs [172.23.177.163 127.0.0.1 ::1]
	I0122 12:10:46.734309   11512 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-484200] and IPs [172.23.177.163 127.0.0.1 ::1]
	I0122 12:10:46.734629   11512 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0122 12:10:46.734629   11512 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0122 12:10:46.734742   11512 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0122 12:10:46.734742   11512 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0122 12:10:46.734742   11512 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0122 12:10:46.734742   11512 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0122 12:10:46.734742   11512 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0122 12:10:46.734742   11512 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0122 12:10:46.735308   11512 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0122 12:10:46.735308   11512 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0122 12:10:46.735455   11512 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0122 12:10:46.735455   11512 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0122 12:10:46.735455   11512 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0122 12:10:46.735455   11512 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0122 12:10:46.735455   11512 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0122 12:10:46.735455   11512 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0122 12:10:46.736029   11512 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0122 12:10:46.736029   11512 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0122 12:10:46.736029   11512 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0122 12:10:46.736029   11512 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0122 12:10:46.736783   11512 out.go:204]   - Booting up control plane ...
	I0122 12:10:46.736783   11512 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0122 12:10:46.736783   11512 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0122 12:10:46.736783   11512 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0122 12:10:46.736783   11512 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0122 12:10:46.737384   11512 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0122 12:10:46.737384   11512 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0122 12:10:46.737384   11512 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0122 12:10:46.737384   11512 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0122 12:10:46.738022   11512 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0122 12:10:46.738022   11512 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0122 12:10:46.738022   11512 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0122 12:10:46.738022   11512 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0122 12:10:46.738022   11512 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0122 12:10:46.738022   11512 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0122 12:10:46.738718   11512 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.012226 seconds
	I0122 12:10:46.738748   11512 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.012226 seconds
	I0122 12:10:46.738748   11512 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0122 12:10:46.738748   11512 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0122 12:10:46.738748   11512 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0122 12:10:46.739281   11512 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0122 12:10:46.739415   11512 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0122 12:10:46.739452   11512 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0122 12:10:46.739665   11512 kubeadm.go:322] [mark-control-plane] Marking the node multinode-484200 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0122 12:10:46.739665   11512 command_runner.go:130] > [mark-control-plane] Marking the node multinode-484200 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0122 12:10:46.739665   11512 kubeadm.go:322] [bootstrap-token] Using token: ynek14.yrxwudewy2q4mi85
	I0122 12:10:46.739665   11512 command_runner.go:130] > [bootstrap-token] Using token: ynek14.yrxwudewy2q4mi85
	I0122 12:10:46.740585   11512 out.go:204]   - Configuring RBAC rules ...
	I0122 12:10:46.740585   11512 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0122 12:10:46.740585   11512 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0122 12:10:46.741245   11512 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0122 12:10:46.741245   11512 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0122 12:10:46.741245   11512 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0122 12:10:46.741245   11512 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0122 12:10:46.741892   11512 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0122 12:10:46.741892   11512 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0122 12:10:46.741892   11512 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0122 12:10:46.741892   11512 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0122 12:10:46.742525   11512 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0122 12:10:46.742525   11512 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0122 12:10:46.742525   11512 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0122 12:10:46.742525   11512 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0122 12:10:46.742525   11512 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0122 12:10:46.742525   11512 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0122 12:10:46.743154   11512 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0122 12:10:46.743182   11512 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0122 12:10:46.743182   11512 kubeadm.go:322] 
	I0122 12:10:46.743246   11512 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0122 12:10:46.743246   11512 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0122 12:10:46.743246   11512 kubeadm.go:322] 
	I0122 12:10:46.743246   11512 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0122 12:10:46.743246   11512 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0122 12:10:46.743246   11512 kubeadm.go:322] 
	I0122 12:10:46.743246   11512 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0122 12:10:46.743246   11512 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0122 12:10:46.743246   11512 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0122 12:10:46.743246   11512 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0122 12:10:46.743246   11512 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0122 12:10:46.743246   11512 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0122 12:10:46.744114   11512 kubeadm.go:322] 
	I0122 12:10:46.744114   11512 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0122 12:10:46.744114   11512 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0122 12:10:46.744114   11512 kubeadm.go:322] 
	I0122 12:10:46.744114   11512 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0122 12:10:46.744114   11512 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0122 12:10:46.744114   11512 kubeadm.go:322] 
	I0122 12:10:46.744114   11512 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0122 12:10:46.744114   11512 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0122 12:10:46.744114   11512 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0122 12:10:46.744114   11512 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0122 12:10:46.744114   11512 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0122 12:10:46.745113   11512 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0122 12:10:46.745113   11512 kubeadm.go:322] 
	I0122 12:10:46.745113   11512 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0122 12:10:46.745113   11512 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0122 12:10:46.745113   11512 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0122 12:10:46.745113   11512 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0122 12:10:46.745113   11512 kubeadm.go:322] 
	I0122 12:10:46.745113   11512 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token ynek14.yrxwudewy2q4mi85 \
	I0122 12:10:46.745113   11512 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ynek14.yrxwudewy2q4mi85 \
	I0122 12:10:46.745113   11512 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:d9e856518777d1a58dcd52dda4caee52e72283bad61f5aa3ed0978c1365deb23 \
	I0122 12:10:46.745113   11512 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:d9e856518777d1a58dcd52dda4caee52e72283bad61f5aa3ed0978c1365deb23 \
	I0122 12:10:46.745113   11512 kubeadm.go:322] 	--control-plane 
	I0122 12:10:46.745113   11512 command_runner.go:130] > 	--control-plane 
	I0122 12:10:46.746114   11512 kubeadm.go:322] 
	I0122 12:10:46.746114   11512 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0122 12:10:46.746114   11512 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0122 12:10:46.746114   11512 kubeadm.go:322] 
	I0122 12:10:46.746114   11512 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ynek14.yrxwudewy2q4mi85 \
	I0122 12:10:46.746114   11512 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token ynek14.yrxwudewy2q4mi85 \
	I0122 12:10:46.746114   11512 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:d9e856518777d1a58dcd52dda4caee52e72283bad61f5aa3ed0978c1365deb23 
	I0122 12:10:46.746114   11512 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:d9e856518777d1a58dcd52dda4caee52e72283bad61f5aa3ed0978c1365deb23 
	I0122 12:10:46.746114   11512 cni.go:84] Creating CNI manager for ""
	I0122 12:10:46.746114   11512 cni.go:136] 1 nodes found, recommending kindnet
	I0122 12:10:46.747129   11512 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0122 12:10:46.760097   11512 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0122 12:10:46.767424   11512 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0122 12:10:46.767642   11512 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0122 12:10:46.767642   11512 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0122 12:10:46.767642   11512 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0122 12:10:46.767642   11512 command_runner.go:130] > Access: 2024-01-22 12:08:58.504080900 +0000
	I0122 12:10:46.767642   11512 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0122 12:10:46.767767   11512 command_runner.go:130] > Change: 2024-01-22 12:08:50.118000000 +0000
	I0122 12:10:46.767819   11512 command_runner.go:130] >  Birth: -
	I0122 12:10:46.768713   11512 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0122 12:10:46.768772   11512 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0122 12:10:46.850201   11512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0122 12:10:48.358367   11512 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0122 12:10:48.358367   11512 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0122 12:10:48.358367   11512 command_runner.go:130] > serviceaccount/kindnet created
	I0122 12:10:48.358367   11512 command_runner.go:130] > daemonset.apps/kindnet created
	I0122 12:10:48.358367   11512 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.5081595s)
	I0122 12:10:48.358367   11512 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0122 12:10:48.375362   11512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=02088786cd935ebc1444d13e603368558b96d0e3 minikube.k8s.io/name=multinode-484200 minikube.k8s.io/updated_at=2024_01_22T12_10_48_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 12:10:48.377364   11512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 12:10:48.380362   11512 command_runner.go:130] > -16
	I0122 12:10:48.380362   11512 ops.go:34] apiserver oom_adj: -16
	I0122 12:10:48.536819   11512 command_runner.go:130] > node/multinode-484200 labeled
	I0122 12:10:48.536997   11512 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0122 12:10:48.550670   11512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 12:10:48.658289   11512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0122 12:10:49.059768   11512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 12:10:49.182387   11512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0122 12:10:49.559854   11512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 12:10:49.671106   11512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0122 12:10:50.067778   11512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 12:10:50.171467   11512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0122 12:10:50.564490   11512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 12:10:50.670740   11512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0122 12:10:51.055279   11512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 12:10:51.182650   11512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0122 12:10:51.560591   11512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 12:10:51.679563   11512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0122 12:10:52.052478   11512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 12:10:52.177792   11512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0122 12:10:52.565213   11512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 12:10:52.681281   11512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0122 12:10:53.060156   11512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 12:10:53.175774   11512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0122 12:10:53.554548   11512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 12:10:53.660433   11512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0122 12:10:54.061716   11512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 12:10:54.165730   11512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0122 12:10:54.561512   11512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 12:10:54.672835   11512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0122 12:10:55.049749   11512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 12:10:55.168311   11512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0122 12:10:55.560801   11512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 12:10:55.679144   11512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0122 12:10:56.066597   11512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 12:10:56.192115   11512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0122 12:10:56.559205   11512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 12:10:56.685770   11512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0122 12:10:57.066453   11512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 12:10:57.208500   11512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0122 12:10:57.550752   11512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 12:10:57.677572   11512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0122 12:10:58.061050   11512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 12:10:58.185982   11512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0122 12:10:58.564096   11512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 12:10:58.698406   11512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0122 12:10:59.057862   11512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 12:10:59.177647   11512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0122 12:10:59.559054   11512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 12:10:59.790003   11512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0122 12:11:00.050844   11512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 12:11:00.202809   11512 command_runner.go:130] > NAME      SECRETS   AGE
	I0122 12:11:00.202809   11512 command_runner.go:130] > default   0         1s
	I0122 12:11:00.202809   11512 kubeadm.go:1088] duration metric: took 11.8443897s to wait for elevateKubeSystemPrivileges.
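[editor's note] The sequence above shows minikube polling `kubectl get sa default` roughly twice per second, tolerating `NotFound` errors, until the `default` service account exists (the `elevateKubeSystemPrivileges` wait). A minimal sketch of that retry pattern, with illustrative names (this is not minikube's actual Go helper, which lives in `kubeadm.go`):

```python
import time

def wait_for(check, timeout=120.0, interval=0.5):
    """Poll `check()` until it returns truthy or `timeout` seconds elapse.

    Mirrors the loop in the log: each failed attempt (the repeated
    'serviceaccounts "default" not found' lines) sleeps ~0.5s and retries.
    Returns True on success, False if the deadline passes first.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False
```

In the log the check succeeds on the attempt at 12:11:00, after ~11.8s of retries, at which point `kubectl get sa default` finally prints the `NAME SECRETS AGE` table instead of an error.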
	I0122 12:11:00.202809   11512 kubeadm.go:406] StartCluster complete in 27.0693975s
	I0122 12:11:00.202809   11512 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 12:11:00.203815   11512 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 12:11:00.204808   11512 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 12:11:00.206815   11512 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0122 12:11:00.206815   11512 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0122 12:11:00.206815   11512 addons.go:69] Setting storage-provisioner=true in profile "multinode-484200"
	I0122 12:11:00.206815   11512 addons.go:234] Setting addon storage-provisioner=true in "multinode-484200"
	I0122 12:11:00.206815   11512 addons.go:69] Setting default-storageclass=true in profile "multinode-484200"
	I0122 12:11:00.206815   11512 host.go:66] Checking if "multinode-484200" exists ...
	I0122 12:11:00.206815   11512 config.go:182] Loaded profile config "multinode-484200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 12:11:00.206815   11512 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-484200"
	I0122 12:11:00.207814   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:11:00.207814   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:11:00.224520   11512 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 12:11:00.225330   11512 kapi.go:59] client config for multinode-484200: &rest.Config{Host:"https://172.23.177.163:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x184a600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0122 12:11:00.227129   11512 cert_rotation.go:137] Starting client certificate rotation controller
	I0122 12:11:00.227931   11512 round_trippers.go:463] GET https://172.23.177.163:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0122 12:11:00.227931   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:00.228001   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:00.228001   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:00.243896   11512 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0122 12:11:00.244218   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:00.244218   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:00.244323   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:00.244371   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:00.244449   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:00.244449   11512 round_trippers.go:580]     Content-Length: 291
	I0122 12:11:00.244551   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:00 GMT
	I0122 12:11:00.244589   11512 round_trippers.go:580]     Audit-Id: 7bee344c-21b4-482e-8051-7dea772348fc
	I0122 12:11:00.244589   11512 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"98df7e30-2026-4507-ad8d-589d82e9b1ba","resourceVersion":"382","creationTimestamp":"2024-01-22T12:10:46Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0122 12:11:00.245189   11512 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"98df7e30-2026-4507-ad8d-589d82e9b1ba","resourceVersion":"382","creationTimestamp":"2024-01-22T12:10:46Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0122 12:11:00.245189   11512 round_trippers.go:463] PUT https://172.23.177.163:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0122 12:11:00.245189   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:00.245189   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:00.245189   11512 round_trippers.go:473]     Content-Type: application/json
	I0122 12:11:00.245189   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:00.252878   11512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0122 12:11:00.253881   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:00.253881   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:00 GMT
	I0122 12:11:00.253881   11512 round_trippers.go:580]     Audit-Id: d9d70ad2-5088-43a4-aa9b-bbe520f7f961
	I0122 12:11:00.253881   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:00.253881   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:00.253881   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:00.253881   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:00.253881   11512 round_trippers.go:580]     Content-Length: 291
	I0122 12:11:00.253881   11512 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"98df7e30-2026-4507-ad8d-589d82e9b1ba","resourceVersion":"383","creationTimestamp":"2024-01-22T12:10:46Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0122 12:11:00.575803   11512 command_runner.go:130] > apiVersion: v1
	I0122 12:11:00.575903   11512 command_runner.go:130] > data:
	I0122 12:11:00.575903   11512 command_runner.go:130] >   Corefile: |
	I0122 12:11:00.575998   11512 command_runner.go:130] >     .:53 {
	I0122 12:11:00.575998   11512 command_runner.go:130] >         errors
	I0122 12:11:00.575998   11512 command_runner.go:130] >         health {
	I0122 12:11:00.575998   11512 command_runner.go:130] >            lameduck 5s
	I0122 12:11:00.575998   11512 command_runner.go:130] >         }
	I0122 12:11:00.575998   11512 command_runner.go:130] >         ready
	I0122 12:11:00.576067   11512 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0122 12:11:00.576067   11512 command_runner.go:130] >            pods insecure
	I0122 12:11:00.576067   11512 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0122 12:11:00.576067   11512 command_runner.go:130] >            ttl 30
	I0122 12:11:00.576067   11512 command_runner.go:130] >         }
	I0122 12:11:00.576067   11512 command_runner.go:130] >         prometheus :9153
	I0122 12:11:00.576067   11512 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0122 12:11:00.576067   11512 command_runner.go:130] >            max_concurrent 1000
	I0122 12:11:00.576288   11512 command_runner.go:130] >         }
	I0122 12:11:00.576370   11512 command_runner.go:130] >         cache 30
	I0122 12:11:00.576423   11512 command_runner.go:130] >         loop
	I0122 12:11:00.576423   11512 command_runner.go:130] >         reload
	I0122 12:11:00.576423   11512 command_runner.go:130] >         loadbalance
	I0122 12:11:00.576423   11512 command_runner.go:130] >     }
	I0122 12:11:00.576423   11512 command_runner.go:130] > kind: ConfigMap
	I0122 12:11:00.576423   11512 command_runner.go:130] > metadata:
	I0122 12:11:00.576741   11512 command_runner.go:130] >   creationTimestamp: "2024-01-22T12:10:46Z"
	I0122 12:11:00.576826   11512 command_runner.go:130] >   name: coredns
	I0122 12:11:00.576826   11512 command_runner.go:130] >   namespace: kube-system
	I0122 12:11:00.576892   11512 command_runner.go:130] >   resourceVersion: "255"
	I0122 12:11:00.576955   11512 command_runner.go:130] >   uid: b9768c33-b82e-4fdb-8e5b-45028bdc3da3
	I0122 12:11:00.577349   11512 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.23.176.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0122 12:11:00.736879   11512 round_trippers.go:463] GET https://172.23.177.163:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0122 12:11:00.736954   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:00.736954   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:00.736954   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:00.741185   11512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:11:00.741441   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:00.741441   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:00 GMT
	I0122 12:11:00.741441   11512 round_trippers.go:580]     Audit-Id: 63ca6455-ec2e-40d4-99ca-e85f03ca1daa
	I0122 12:11:00.741441   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:00.741441   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:00.741549   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:00.741585   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:00.741585   11512 round_trippers.go:580]     Content-Length: 291
	I0122 12:11:00.741680   11512 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"98df7e30-2026-4507-ad8d-589d82e9b1ba","resourceVersion":"393","creationTimestamp":"2024-01-22T12:10:46Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0122 12:11:00.741873   11512 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-484200" context rescaled to 1 replicas
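[editor's note] The rescale above works by GETting the deployment's `autoscaling/v1` Scale subresource, setting `spec.replicas` to 1, and PUTting the object back (the request/response bodies are visible in the log). A sketch of just the body transformation, with the HTTP layer omitted; the sample JSON is copied from the GET response in the log:

```python
import json

# Scale object as returned by GET .../deployments/coredns/scale (from the log,
# trimmed to the fields the transformation touches).
scale_json = ('{"kind":"Scale","apiVersion":"autoscaling/v1",'
              '"metadata":{"name":"coredns","namespace":"kube-system"},'
              '"spec":{"replicas":2},'
              '"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}')

def rescale_body(body: str, replicas: int) -> str:
    """Return the PUT body for the Scale subresource with spec.replicas updated.

    Only the desired count changes; status stays as reported by the server
    (it is ignored on write anyway).
    """
    scale = json.loads(body)
    scale["spec"]["replicas"] = replicas
    return json.dumps(scale)

put_body = rescale_body(scale_json, 1)
```

The server then reports `spec.replicas: 1` while `status.replicas` converges from 2 to 1, which is exactly the progression the three Scale responses in the log show (resourceVersion 382 → 383 → 393).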
	I0122 12:11:00.742018   11512 start.go:223] Will wait 6m0s for node &{Name: IP:172.23.177.163 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0122 12:11:00.743455   11512 out.go:177] * Verifying Kubernetes components...
	I0122 12:11:00.758767   11512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 12:11:01.408725   11512 command_runner.go:130] > configmap/coredns replaced
	I0122 12:11:01.408725   11512 start.go:929] {"host.minikube.internal": 172.23.176.1} host record injected into CoreDNS's ConfigMap
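[editor's note] The `sed` pipeline at 12:11:00 injects a `hosts` block for `host.minikube.internal` ahead of the Corefile's `forward` plugin and a `log` directive ahead of `errors`, then replaces the ConfigMap. A Python sketch of the same text transformation (equivalent logic for illustration, not minikube's implementation):

```python
def inject_host_record(corefile: str, ip: str) -> str:
    """Insert a hosts{} block before the `forward` plugin and `log` before
    `errors`, mirroring the two sed expressions in the log."""
    hosts_block = ("        hosts {\n"
                   f"           {ip} host.minikube.internal\n"
                   "           fallthrough\n"
                   "        }\n")
    out = []
    for line in corefile.splitlines(keepends=True):
        stripped = line.strip()
        if stripped.startswith("forward ."):
            out.append(hosts_block)   # hosts must come before forward
        elif stripped == "errors":
            out.append("        log\n")
        out.append(line)
    return "".join(out)
```

Ordering matters here: CoreDNS resolves `host.minikube.internal` from the `hosts` plugin and falls through to `forward` for everything else, which is why the block is inserted before the `forward . /etc/resolv.conf` stanza rather than appended.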
	I0122 12:11:01.410739   11512 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 12:11:01.412019   11512 kapi.go:59] client config for multinode-484200: &rest.Config{Host:"https://172.23.177.163:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x184a600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0122 12:11:01.412784   11512 node_ready.go:35] waiting up to 6m0s for node "multinode-484200" to be "Ready" ...
	I0122 12:11:01.413503   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:11:01.413586   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:01.413586   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:01.413586   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:01.418032   11512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:11:01.418082   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:01.418082   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:01 GMT
	I0122 12:11:01.418082   11512 round_trippers.go:580]     Audit-Id: fd8fdd2d-1b51-423b-bf60-437b127cb36a
	I0122 12:11:01.418189   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:01.418189   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:01.418189   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:01.418189   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:01.419015   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"364","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0122 12:11:01.928419   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:11:01.928419   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:01.928501   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:01.928501   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:01.931995   11512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:11:01.931995   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:01.931995   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:01.932942   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:01.932942   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:01 GMT
	I0122 12:11:01.932942   11512 round_trippers.go:580]     Audit-Id: 72a7674b-cd3e-4d0a-8d90-6e8a7f0baab9
	I0122 12:11:01.932942   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:01.933026   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:01.933308   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"364","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0122 12:11:02.422671   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:11:02.422861   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:02.422861   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:02.422861   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:02.426670   11512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:11:02.427604   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:02.427662   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:02 GMT
	I0122 12:11:02.427662   11512 round_trippers.go:580]     Audit-Id: 3a7b1b9d-1cab-4382-a5f0-a1d095223b86
	I0122 12:11:02.427662   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:02.427662   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:02.427662   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:02.427662   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:02.428343   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"364","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0122 12:11:02.548165   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:11:02.548385   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:11:02.548165   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:11:02.548633   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:11:02.549836   11512 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 12:11:02.549591   11512 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 12:11:02.550410   11512 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0122 12:11:02.550410   11512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0122 12:11:02.550410   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:11:02.551378   11512 kapi.go:59] client config for multinode-484200: &rest.Config{Host:"https://172.23.177.163:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x184a600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0122 12:11:02.552312   11512 addons.go:234] Setting addon default-storageclass=true in "multinode-484200"
	I0122 12:11:02.552481   11512 host.go:66] Checking if "multinode-484200" exists ...
	I0122 12:11:02.553545   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:11:02.914968   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:11:02.914968   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:02.914968   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:02.914968   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:02.919543   11512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:11:02.919605   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:02.919605   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:02.919605   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:02 GMT
	I0122 12:11:02.919605   11512 round_trippers.go:580]     Audit-Id: 1ac2517e-6a2e-48ee-84e6-a57c5bd71c1b
	I0122 12:11:02.919605   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:02.919605   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:02.919605   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:02.919605   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"364","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0122 12:11:03.426179   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:11:03.426179   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:03.426179   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:03.426179   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:03.435127   11512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0122 12:11:03.435127   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:03.435127   11512 round_trippers.go:580]     Audit-Id: fdcdb1bb-b488-4189-9433-0c09e5d3fad6
	I0122 12:11:03.435127   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:03.435127   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:03.435127   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:03.435127   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:03.435255   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:03 GMT
	I0122 12:11:03.435307   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"364","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0122 12:11:03.435903   11512 node_ready.go:58] node "multinode-484200" has status "Ready":"False"
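The `node_ready.go` line above reports the node's `Ready` condition parsed out of the Node JSON returned by the repeated GET requests. A minimal sketch of that extraction — hypothetical helper names and a trimmed-down struct, not minikube's actual code or the real `k8s.io/api` types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// nodeCondition mirrors only the fields needed from a v1.Node's
// status.conditions entries (illustrative minimal struct).
type nodeCondition struct {
	Type   string `json:"type"`
	Status string `json:"status"`
}

type node struct {
	Status struct {
		Conditions []nodeCondition `json:"conditions"`
	} `json:"status"`
}

// readyStatus returns the Status ("True", "False", or "Unknown") of the
// node's Ready condition, or "" if the condition is absent.
func readyStatus(raw []byte) (string, error) {
	var n node
	if err := json.Unmarshal(raw, &n); err != nil {
		return "", err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status, nil
		}
	}
	return "", nil
}

func main() {
	sample := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False"}]}}`)
	s, err := readyStatus(sample)
	fmt.Println(s, err)
}
```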
	I0122 12:11:03.920151   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:11:03.920151   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:03.920151   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:03.920151   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:03.923740   11512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:11:03.924723   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:03.924723   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:03.924723   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:03 GMT
	I0122 12:11:03.924723   11512 round_trippers.go:580]     Audit-Id: 7f393125-9753-47f7-a23c-b768d7b9ef03
	I0122 12:11:03.924723   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:03.924723   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:03.924723   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:03.925054   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"364","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0122 12:11:04.428562   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:11:04.428562   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:04.428562   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:04.428562   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:04.433927   11512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:11:04.433989   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:04.434054   11512 round_trippers.go:580]     Audit-Id: 513fe995-f6ad-4de0-a9e0-49f51d4d1aeb
	I0122 12:11:04.434054   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:04.434114   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:04.434114   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:04.434224   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:04.434224   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:04 GMT
	I0122 12:11:04.434446   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"364","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0122 12:11:04.871192   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:11:04.871321   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:11:04.871321   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:11:04.871321   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:11:04.871321   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:11:04.871321   11512 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0122 12:11:04.871321   11512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0122 12:11:04.871321   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
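The `[executing ==>]` lines show the Hyper-V driver shelling out to a non-interactive PowerShell to query the VM state: `( Hyper-V\Get-VM <name> ).state`. A sketch of how such an argument list could be assembled in Go — reconstructed from the log line above, not the driver's actual source:

```go
package main

import (
	"fmt"
	"strings"
)

// vmStateArgs builds the PowerShell argument list matching the command
// logged above: -NoProfile -NonInteractive ( Hyper-V\Get-VM <name> ).state.
// Hypothetical reconstruction for illustration only.
func vmStateArgs(vmName string) []string {
	return []string{
		"-NoProfile",
		"-NonInteractive",
		fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vmName),
	}
}

func main() {
	// In the real driver these args would be passed to powershell.exe
	// via os/exec; here we only print the assembled command line.
	fmt.Println(strings.Join(vmStateArgs("multinode-484200"), " "))
}
```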
	I0122 12:11:04.919241   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:11:04.919241   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:04.919241   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:04.919241   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:04.923814   11512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:11:04.923814   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:04.923878   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:04.923878   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:04.923878   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:04.923878   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:04.923878   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:04 GMT
	I0122 12:11:04.923878   11512 round_trippers.go:580]     Audit-Id: f6d68bea-1421-4ad0-b036-d88d21dc49d4
	I0122 12:11:04.923878   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"364","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0122 12:11:05.427326   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:11:05.427416   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:05.427416   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:05.427416   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:05.433054   11512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:11:05.433149   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:05.433231   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:05 GMT
	I0122 12:11:05.433283   11512 round_trippers.go:580]     Audit-Id: 9c074c20-02c6-4de4-a118-715eeffc303d
	I0122 12:11:05.433283   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:05.433283   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:05.433283   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:05.433283   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:05.433630   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"364","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0122 12:11:05.919997   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:11:05.920084   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:05.920084   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:05.920141   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:05.924847   11512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:11:05.924847   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:05.924847   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:05.924847   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:05.924847   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:05.924847   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:05 GMT
	I0122 12:11:05.924847   11512 round_trippers.go:580]     Audit-Id: 6256d3ca-b316-48b1-b8c2-10883a0e79cd
	I0122 12:11:05.924847   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:05.924847   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"364","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0122 12:11:05.925760   11512 node_ready.go:58] node "multinode-484200" has status "Ready":"False"
	I0122 12:11:06.426097   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:11:06.426097   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:06.426097   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:06.426097   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:06.430934   11512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:11:06.430934   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:06.430934   11512 round_trippers.go:580]     Audit-Id: e016d7c0-9eb1-433d-a21a-6537a07759a7
	I0122 12:11:06.430934   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:06.431054   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:06.431054   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:06.431054   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:06.431054   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:06 GMT
	I0122 12:11:06.431355   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"364","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0122 12:11:06.918635   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:11:06.918635   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:06.918635   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:06.918635   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:06.922750   11512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:11:06.922750   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:06.922750   11512 round_trippers.go:580]     Audit-Id: f6bf1028-9f4d-4805-9016-63dad70ca540
	I0122 12:11:06.922750   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:06.922750   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:06.922750   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:06.922750   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:06.922750   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:06 GMT
	I0122 12:11:06.922924   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"364","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0122 12:11:07.421765   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:11:07.421765   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:07.421765   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:07.421765   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:07.426380   11512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:11:07.426380   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:07.426676   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:07.426676   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:07 GMT
	I0122 12:11:07.426676   11512 round_trippers.go:580]     Audit-Id: 396316e5-c52b-4dfc-ab31-b391d3f98ba9
	I0122 12:11:07.426676   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:07.426676   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:07.426676   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:07.426676   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:11:07.426676   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:11:07.426676   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:11:07.426676   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"364","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0122 12:11:07.873146   11512 main.go:141] libmachine: [stdout =====>] : 172.23.177.163
	
	I0122 12:11:07.873146   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:11:07.873146   11512 sshutil.go:53] new ssh client: &{IP:172.23.177.163 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200\id_rsa Username:docker}
	I0122 12:11:07.916684   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:11:07.916684   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:07.916684   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:07.916684   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:07.919683   11512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:11:07.919683   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:07.919683   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:07.919683   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:07.920688   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:07.920688   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:07.920688   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:07 GMT
	I0122 12:11:07.920688   11512 round_trippers.go:580]     Audit-Id: 11e9e680-aac5-4f56-b90c-8052794c8773
	I0122 12:11:07.920688   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"364","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0122 12:11:08.011307   11512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0122 12:11:08.414300   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:11:08.414427   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:08.414427   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:08.414427   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:08.418350   11512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:11:08.418679   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:08.418679   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:08.418679   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:08.418679   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:08.418769   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:08 GMT
	I0122 12:11:08.418769   11512 round_trippers.go:580]     Audit-Id: e4013bda-5346-4f88-b087-c0145eb83b89
	I0122 12:11:08.418769   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:08.418927   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"364","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0122 12:11:08.419633   11512 node_ready.go:58] node "multinode-484200" has status "Ready":"False"
	I0122 12:11:08.873119   11512 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0122 12:11:08.873119   11512 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0122 12:11:08.873119   11512 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0122 12:11:08.873119   11512 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0122 12:11:08.873119   11512 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0122 12:11:08.873119   11512 command_runner.go:130] > pod/storage-provisioner created
	I0122 12:11:08.918184   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:11:08.918184   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:08.918184   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:08.918184   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:08.921802   11512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:11:08.922673   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:08.922673   11512 round_trippers.go:580]     Audit-Id: 3db73885-8e50-4c0c-94cc-800d5bff210b
	I0122 12:11:08.922673   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:08.922673   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:08.922673   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:08.922673   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:08.922673   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:08 GMT
	I0122 12:11:08.922673   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"364","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0122 12:11:09.425999   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:11:09.425999   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:09.425999   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:09.425999   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:09.430572   11512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:11:09.430572   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:09.430572   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:09.430572   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:09 GMT
	I0122 12:11:09.430572   11512 round_trippers.go:580]     Audit-Id: cf9fde22-97d6-4386-b519-876b2931ae0b
	I0122 12:11:09.430572   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:09.430572   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:09.430572   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:09.430572   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"364","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0122 12:11:09.918715   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:11:09.918715   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:09.918715   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:09.918715   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:09.924048   11512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:11:09.924048   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:09.924048   11512 round_trippers.go:580]     Audit-Id: bd2adc03-756a-4bc2-a109-f0c270b994b5
	I0122 12:11:09.924048   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:09.924048   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:09.924048   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:09.924048   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:09.924048   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:09 GMT
	I0122 12:11:09.924048   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"364","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0122 12:11:10.108956   11512 main.go:141] libmachine: [stdout =====>] : 172.23.177.163
	
	I0122 12:11:10.108956   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:11:10.109490   11512 sshutil.go:53] new ssh client: &{IP:172.23.177.163 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200\id_rsa Username:docker}
	I0122 12:11:10.247581   11512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0122 12:11:10.424628   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:11:10.424628   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:10.424628   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:10.424628   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:10.428631   11512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:11:10.428631   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:10.428631   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:10 GMT
	I0122 12:11:10.428631   11512 round_trippers.go:580]     Audit-Id: 7f23a773-9fe5-42f0-b7f3-c740128bfc21
	I0122 12:11:10.428631   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:10.428631   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:10.428631   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:10.428631   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:10.429502   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"364","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0122 12:11:10.429502   11512 node_ready.go:58] node "multinode-484200" has status "Ready":"False"
	I0122 12:11:10.606390   11512 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0122 12:11:10.606390   11512 round_trippers.go:463] GET https://172.23.177.163:8443/apis/storage.k8s.io/v1/storageclasses
	I0122 12:11:10.606390   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:10.606390   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:10.606390   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:10.610264   11512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:11:10.610264   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:10.610264   11512 round_trippers.go:580]     Audit-Id: 013e25ad-1f93-4f34-8257-f089ee1cbd33
	I0122 12:11:10.610264   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:10.610264   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:10.610264   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:10.610264   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:10.610264   11512 round_trippers.go:580]     Content-Length: 1273
	I0122 12:11:10.610264   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:10 GMT
	I0122 12:11:10.610264   11512 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"415"},"items":[{"metadata":{"name":"standard","uid":"e5a00cd2-2826-4b68-895a-2aa6d781b488","resourceVersion":"415","creationTimestamp":"2024-01-22T12:11:10Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-22T12:11:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0122 12:11:10.611186   11512 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"e5a00cd2-2826-4b68-895a-2aa6d781b488","resourceVersion":"415","creationTimestamp":"2024-01-22T12:11:10Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-22T12:11:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0122 12:11:10.611383   11512 round_trippers.go:463] PUT https://172.23.177.163:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0122 12:11:10.611383   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:10.611383   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:10.611383   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:10.611383   11512 round_trippers.go:473]     Content-Type: application/json
	I0122 12:11:10.615387   11512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:11:10.615387   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:10.615387   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:10.615387   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:10.615387   11512 round_trippers.go:580]     Content-Length: 1220
	I0122 12:11:10.615387   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:10 GMT
	I0122 12:11:10.615387   11512 round_trippers.go:580]     Audit-Id: cf6bbb8f-3db3-4a30-9485-1198b9bc7848
	I0122 12:11:10.615387   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:10.615387   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:10.615387   11512 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"e5a00cd2-2826-4b68-895a-2aa6d781b488","resourceVersion":"415","creationTimestamp":"2024-01-22T12:11:10Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-22T12:11:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0122 12:11:10.616594   11512 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0122 12:11:10.616810   11512 addons.go:505] enable addons completed in 10.4099487s: enabled=[storage-provisioner default-storageclass]
	I0122 12:11:10.917560   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:11:10.917640   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:10.917716   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:10.917716   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:10.922138   11512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:11:10.923132   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:10.923162   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:10.923162   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:10 GMT
	I0122 12:11:10.923162   11512 round_trippers.go:580]     Audit-Id: 737e502a-b484-4de2-b324-479933742594
	I0122 12:11:10.923162   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:10.923162   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:10.923249   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:10.923486   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"364","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0122 12:11:11.420722   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:11:11.420796   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:11.420796   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:11.420796   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:11.425241   11512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:11:11.425241   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:11.425241   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:11.425241   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:11.425241   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:11 GMT
	I0122 12:11:11.425241   11512 round_trippers.go:580]     Audit-Id: 35e99022-e465-41d5-877a-6850e5398186
	I0122 12:11:11.425241   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:11.425241   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:11.426251   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"364","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0122 12:11:11.924885   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:11:11.925053   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:11.925053   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:11.925053   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:11.929462   11512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:11:11.929462   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:11.929462   11512 round_trippers.go:580]     Audit-Id: fb4df048-1ae7-4434-8601-78228ecc5de1
	I0122 12:11:11.929462   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:11.929462   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:11.929462   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:11.929462   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:11.929462   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:11 GMT
	I0122 12:11:11.930368   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"364","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0122 12:11:12.415054   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:11:12.415054   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:12.415054   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:12.415054   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:12.418082   11512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:11:12.419116   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:12.419116   11512 round_trippers.go:580]     Audit-Id: 6999ddf5-baae-4878-a25e-df38c3813843
	I0122 12:11:12.419116   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:12.419116   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:12.419240   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:12.419263   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:12.419263   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:12 GMT
	I0122 12:11:12.419263   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"421","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0122 12:11:12.420140   11512 node_ready.go:49] node "multinode-484200" has status "Ready":"True"
	I0122 12:11:12.420211   11512 node_ready.go:38] duration metric: took 11.0068361s waiting for node "multinode-484200" to be "Ready" ...
	I0122 12:11:12.420211   11512 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0122 12:11:12.420211   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/namespaces/kube-system/pods
	I0122 12:11:12.420211   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:12.420211   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:12.420211   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:12.425805   11512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:11:12.425805   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:12.425872   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:12 GMT
	I0122 12:11:12.425872   11512 round_trippers.go:580]     Audit-Id: b5be15fc-c211-4215-9071-9027bf5fdad6
	I0122 12:11:12.425872   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:12.425872   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:12.425872   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:12.425872   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:12.428628   11512 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"426","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54012 chars]
	I0122 12:11:12.443277   11512 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-g9wk9" in "kube-system" namespace to be "Ready" ...
	I0122 12:11:12.443865   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:11:12.443865   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:12.443917   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:12.443917   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:12.447664   11512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:11:12.447799   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:12.447799   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:12.447908   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:12.447908   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:12 GMT
	I0122 12:11:12.447908   11512 round_trippers.go:580]     Audit-Id: f81ceb91-dc21-4bfc-97f9-620ada03386e
	I0122 12:11:12.447908   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:12.447908   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:12.448033   11512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"426","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0122 12:11:12.448865   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:11:12.448865   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:12.448865   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:12.448865   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:12.452451   11512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:11:12.452589   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:12.452589   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:12.452728   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:12.452748   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:12.452748   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:12 GMT
	I0122 12:11:12.452748   11512 round_trippers.go:580]     Audit-Id: 900c2ee1-ef7d-4689-87fb-a29d370ea23f
	I0122 12:11:12.452748   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:12.453185   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"421","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0122 12:11:12.951124   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:11:12.951124   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:12.951124   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:12.951124   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:12.954150   11512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:11:12.954150   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:12.954150   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:12 GMT
	I0122 12:11:12.954150   11512 round_trippers.go:580]     Audit-Id: 0f0bca67-0c72-4a9a-a490-0bc43348a995
	I0122 12:11:12.955173   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:12.955173   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:12.955173   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:12.955173   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:12.955173   11512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"426","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0122 12:11:12.955173   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:11:12.955173   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:12.955173   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:12.955173   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:12.959145   11512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:11:12.959208   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:12.959208   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:12.959314   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:12.959314   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:12 GMT
	I0122 12:11:12.959314   11512 round_trippers.go:580]     Audit-Id: 80150242-867a-40f5-9e55-244fc75d2a38
	I0122 12:11:12.959314   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:12.959314   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:12.959696   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"421","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0122 12:11:13.444374   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:11:13.444374   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:13.444374   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:13.444374   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:13.455333   11512 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0122 12:11:13.455333   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:13.455578   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:13.455578   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:13.455578   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:13.455578   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:13.455578   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:13 GMT
	I0122 12:11:13.455578   11512 round_trippers.go:580]     Audit-Id: d90ef150-49b8-4235-952b-bdc7236d7a08
	I0122 12:11:13.455847   11512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"426","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0122 12:11:13.456603   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:11:13.456670   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:13.456670   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:13.456670   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:13.459467   11512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:11:13.460171   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:13.460171   11512 round_trippers.go:580]     Audit-Id: f2bbf8b8-1262-4d86-9f57-cde5bcd16767
	I0122 12:11:13.460171   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:13.460171   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:13.460171   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:13.460171   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:13.460171   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:13 GMT
	I0122 12:11:13.460652   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"421","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0122 12:11:13.953679   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:11:13.953743   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:13.953743   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:13.953743   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:13.958055   11512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:11:13.958407   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:13.958407   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:13.958407   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:13.958407   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:13.958407   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:13.958508   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:13 GMT
	I0122 12:11:13.958508   11512 round_trippers.go:580]     Audit-Id: 9d2b215e-0442-476f-83be-7329838fa038
	I0122 12:11:13.958886   11512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"426","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0122 12:11:13.959246   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:11:13.959246   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:13.959246   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:13.959246   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:13.961843   11512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:11:13.961843   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:13.961843   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:13 GMT
	I0122 12:11:13.961843   11512 round_trippers.go:580]     Audit-Id: de9ac148-4ec1-42bc-aa62-04467246f499
	I0122 12:11:13.961843   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:13.961843   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:13.961843   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:13.961843   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:13.969067   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"421","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0122 12:11:14.457848   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:11:14.457928   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:14.457928   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:14.457928   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:14.463039   11512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:11:14.463203   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:14.463203   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:14.463203   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:14.463203   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:14.463203   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:14 GMT
	I0122 12:11:14.463203   11512 round_trippers.go:580]     Audit-Id: 2c4d5b24-5259-4cee-bbde-83b9de9a56c5
	I0122 12:11:14.463203   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:14.463365   11512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"426","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0122 12:11:14.464327   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:11:14.464327   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:14.464327   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:14.464327   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:14.467508   11512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:11:14.467508   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:14.467886   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:14.467886   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:14 GMT
	I0122 12:11:14.467886   11512 round_trippers.go:580]     Audit-Id: 978e730c-a86e-48f4-b21c-3bc2473dbd30
	I0122 12:11:14.467886   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:14.467886   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:14.467886   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:14.468204   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"421","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0122 12:11:14.468861   11512 pod_ready.go:102] pod "coredns-5dd5756b68-g9wk9" in "kube-system" namespace has status "Ready":"False"
	I0122 12:11:14.956589   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:11:14.956589   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:14.956589   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:14.956589   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:14.960955   11512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:11:14.960955   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:14.960955   11512 round_trippers.go:580]     Audit-Id: 1836ca3b-ff7d-44cd-8e3b-583621febf68
	I0122 12:11:14.960955   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:14.960955   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:14.960955   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:14.961141   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:14.961141   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:14 GMT
	I0122 12:11:14.961503   11512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"438","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6284 chars]
	I0122 12:11:14.962319   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:11:14.962319   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:14.962319   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:14.962319   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:14.965239   11512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:11:14.965239   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:14.965239   11512 round_trippers.go:580]     Audit-Id: 424419a4-f06a-400e-bf24-6e084c82b37e
	I0122 12:11:14.965239   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:14.965508   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:14.965567   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:14.965567   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:14.965567   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:14 GMT
	I0122 12:11:14.966110   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"421","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0122 12:11:14.966722   11512 pod_ready.go:92] pod "coredns-5dd5756b68-g9wk9" in "kube-system" namespace has status "Ready":"True"
	I0122 12:11:14.966722   11512 pod_ready.go:81] duration metric: took 2.5234334s waiting for pod "coredns-5dd5756b68-g9wk9" in "kube-system" namespace to be "Ready" ...
	I0122 12:11:14.966722   11512 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:11:14.966722   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-484200
	I0122 12:11:14.966722   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:14.966722   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:14.966722   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:14.969329   11512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:11:14.969329   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:14.969329   11512 round_trippers.go:580]     Audit-Id: 0882ffd2-b56b-4d34-a3c6-95fcacc3ea13
	I0122 12:11:14.969329   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:14.969329   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:14.969835   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:14.969835   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:14.969835   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:14 GMT
	I0122 12:11:14.970064   11512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-484200","namespace":"kube-system","uid":"097d4c37-0438-4d04-9327-a80a8f986677","resourceVersion":"296","creationTimestamp":"2024-01-22T12:10:47Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.177.163:2379","kubernetes.io/config.hash":"95ebf92d1132dafd76b54aa40e5207c5","kubernetes.io/config.mirror":"95ebf92d1132dafd76b54aa40e5207c5","kubernetes.io/config.seen":"2024-01-22T12:10:46.856983674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5872 chars]
	I0122 12:11:14.970211   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:11:14.970211   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:14.970211   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:14.970211   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:14.975227   11512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:11:14.975227   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:14.975227   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:14.975227   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:14.975227   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:14.975434   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:14.975434   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:14 GMT
	I0122 12:11:14.975434   11512 round_trippers.go:580]     Audit-Id: 09c06e0d-4cfa-4586-9eb3-c392064c2e84
	I0122 12:11:14.975772   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"421","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0122 12:11:14.976151   11512 pod_ready.go:92] pod "etcd-multinode-484200" in "kube-system" namespace has status "Ready":"True"
	I0122 12:11:14.976151   11512 pod_ready.go:81] duration metric: took 9.4297ms waiting for pod "etcd-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:11:14.976151   11512 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:11:14.976151   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-484200
	I0122 12:11:14.976151   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:14.976151   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:14.976151   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:14.979511   11512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:11:14.979511   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:14.979511   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:14.979511   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:14.979511   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:14 GMT
	I0122 12:11:14.979511   11512 round_trippers.go:580]     Audit-Id: 20344f12-00e1-42c1-bc42-525780f812dd
	I0122 12:11:14.979511   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:14.980042   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:14.980323   11512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-484200","namespace":"kube-system","uid":"c10bf08a-e8a1-4355-8369-b54d7c5e6b68","resourceVersion":"294","creationTimestamp":"2024-01-22T12:10:45Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.177.163:8443","kubernetes.io/config.hash":"94caf781e9d5d6851ed3cfe41e8c6cd0","kubernetes.io/config.mirror":"94caf781e9d5d6851ed3cfe41e8c6cd0","kubernetes.io/config.seen":"2024-01-22T12:10:37.616243302Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7408 chars]
	I0122 12:11:14.980413   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:11:14.980413   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:14.980413   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:14.980413   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:14.984010   11512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:11:14.984010   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:14.984010   11512 round_trippers.go:580]     Audit-Id: 4373531c-2608-4f4d-a961-f67bed0b97ed
	I0122 12:11:14.984010   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:14.984010   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:14.984010   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:14.984010   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:14.984010   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:14 GMT
	I0122 12:11:14.984010   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"421","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0122 12:11:14.984010   11512 pod_ready.go:92] pod "kube-apiserver-multinode-484200" in "kube-system" namespace has status "Ready":"True"
	I0122 12:11:14.984010   11512 pod_ready.go:81] duration metric: took 7.8591ms waiting for pod "kube-apiserver-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:11:14.984010   11512 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:11:14.984010   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-484200
	I0122 12:11:14.984010   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:14.984010   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:14.984010   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:14.989775   11512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:11:14.989775   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:14.989775   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:14.989775   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:14.989775   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:14 GMT
	I0122 12:11:14.989775   11512 round_trippers.go:580]     Audit-Id: 4691cad4-55e2-4689-890c-4701539a7f70
	I0122 12:11:14.989775   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:14.990467   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:14.990753   11512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-484200","namespace":"kube-system","uid":"2f20b1d1-9ba0-417b-9060-a0e50a1522ad","resourceVersion":"327","creationTimestamp":"2024-01-22T12:10:47Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"09e3d582188e7c296d7c87aae161a2a5","kubernetes.io/config.mirror":"09e3d582188e7c296d7c87aae161a2a5","kubernetes.io/config.seen":"2024-01-22T12:10:46.856991974Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6973 chars]
	I0122 12:11:14.990753   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:11:14.990753   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:14.990753   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:14.990753   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:14.996739   11512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:11:14.997811   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:14.997811   11512 round_trippers.go:580]     Audit-Id: c4a791c5-ab91-4c3f-8395-610524e5b7b4
	I0122 12:11:14.997811   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:14.997811   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:14.997811   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:14.997811   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:14.997811   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:14 GMT
	I0122 12:11:14.998038   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"421","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0122 12:11:14.998415   11512 pod_ready.go:92] pod "kube-controller-manager-multinode-484200" in "kube-system" namespace has status "Ready":"True"
	I0122 12:11:14.998415   11512 pod_ready.go:81] duration metric: took 14.4042ms waiting for pod "kube-controller-manager-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:11:14.998471   11512 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m94rg" in "kube-system" namespace to be "Ready" ...
	I0122 12:11:14.998529   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m94rg
	I0122 12:11:14.998618   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:14.998618   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:14.998685   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:15.004747   11512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0122 12:11:15.005300   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:15.005300   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:15.005300   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:15.005373   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:15.005404   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:15.005470   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:15 GMT
	I0122 12:11:15.005508   11512 round_trippers.go:580]     Audit-Id: 5e71e5ff-ecc6-4901-9cb7-308f92451c60
	I0122 12:11:15.005867   11512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-m94rg","generateName":"kube-proxy-","namespace":"kube-system","uid":"603ddbfd-0357-4e57-a38a-5054e02f81ee","resourceVersion":"402","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7762395d-4348-422a-b76e-6e2cdf460767","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7762395d-4348-422a-b76e-6e2cdf460767\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0122 12:11:15.006260   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:11:15.006260   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:15.006260   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:15.006260   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:15.008870   11512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:11:15.008870   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:15.008870   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:15.008870   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:15.009760   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:15 GMT
	I0122 12:11:15.009760   11512 round_trippers.go:580]     Audit-Id: 9311a1e9-3520-4d04-8cd9-7b238cbcbbe4
	I0122 12:11:15.009760   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:15.009760   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:15.009947   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"421","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0122 12:11:15.010594   11512 pod_ready.go:92] pod "kube-proxy-m94rg" in "kube-system" namespace has status "Ready":"True"
	I0122 12:11:15.010728   11512 pod_ready.go:81] duration metric: took 12.2565ms waiting for pod "kube-proxy-m94rg" in "kube-system" namespace to be "Ready" ...
	I0122 12:11:15.010728   11512 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:11:15.161783   11512 request.go:629] Waited for 150.6165ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.177.163:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-484200
	I0122 12:11:15.161783   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-484200
	I0122 12:11:15.161783   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:15.161783   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:15.161783   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:15.166660   11512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:11:15.166698   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:15.166698   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:15.166698   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:15.166698   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:15.166698   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:15 GMT
	I0122 12:11:15.166698   11512 round_trippers.go:580]     Audit-Id: 99949f95-e059-4894-84a0-ef2e090d1b41
	I0122 12:11:15.166698   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:15.166698   11512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-484200","namespace":"kube-system","uid":"d2f95119-a5d5-4331-880e-bf6add1d87b6","resourceVersion":"330","creationTimestamp":"2024-01-22T12:10:47Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"59ca1fdff10e7440743e9ed3e4fed55c","kubernetes.io/config.mirror":"59ca1fdff10e7440743e9ed3e4fed55c","kubernetes.io/config.seen":"2024-01-22T12:10:46.856993074Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4703 chars]
	I0122 12:11:15.363890   11512 request.go:629] Waited for 196.0703ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:11:15.363890   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:11:15.363890   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:15.363890   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:15.363890   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:15.370765   11512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0122 12:11:15.370765   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:15.370765   11512 round_trippers.go:580]     Audit-Id: 185fac1f-d1a6-4429-ba28-561d1d82d894
	I0122 12:11:15.370765   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:15.370765   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:15.370765   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:15.370765   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:15.370765   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:15 GMT
	I0122 12:11:15.371349   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"421","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0122 12:11:15.371576   11512 pod_ready.go:92] pod "kube-scheduler-multinode-484200" in "kube-system" namespace has status "Ready":"True"
	I0122 12:11:15.371576   11512 pod_ready.go:81] duration metric: took 360.8467ms waiting for pod "kube-scheduler-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:11:15.371576   11512 pod_ready.go:38] duration metric: took 2.9513522s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0122 12:11:15.371576   11512 api_server.go:52] waiting for apiserver process to appear ...
	I0122 12:11:15.385307   11512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 12:11:15.405569   11512 command_runner.go:130] > 2045
	I0122 12:11:15.405569   11512 api_server.go:72] duration metric: took 14.6634003s to wait for apiserver process to appear ...
	I0122 12:11:15.405569   11512 api_server.go:88] waiting for apiserver healthz status ...
	I0122 12:11:15.405569   11512 api_server.go:253] Checking apiserver healthz at https://172.23.177.163:8443/healthz ...
	I0122 12:11:15.414304   11512 api_server.go:279] https://172.23.177.163:8443/healthz returned 200:
	ok
	I0122 12:11:15.414508   11512 round_trippers.go:463] GET https://172.23.177.163:8443/version
	I0122 12:11:15.414508   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:15.414508   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:15.414508   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:15.416233   11512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0122 12:11:15.416233   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:15.416233   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:15.416233   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:15.416586   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:15.416586   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:15.416586   11512 round_trippers.go:580]     Content-Length: 264
	I0122 12:11:15.416586   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:15 GMT
	I0122 12:11:15.416586   11512 round_trippers.go:580]     Audit-Id: 283f4f4f-fc59-4046-8a1b-b69477a54fc7
	I0122 12:11:15.416586   11512 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0122 12:11:15.416675   11512 api_server.go:141] control plane version: v1.28.4
	I0122 12:11:15.416675   11512 api_server.go:131] duration metric: took 11.1051ms to wait for apiserver health ...
	I0122 12:11:15.416675   11512 system_pods.go:43] waiting for kube-system pods to appear ...
	I0122 12:11:15.567388   11512 request.go:629] Waited for 150.7128ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.177.163:8443/api/v1/namespaces/kube-system/pods
	I0122 12:11:15.567388   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/namespaces/kube-system/pods
	I0122 12:11:15.567388   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:15.567388   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:15.567388   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:15.576952   11512 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0122 12:11:15.576952   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:15.576952   11512 round_trippers.go:580]     Audit-Id: 8825497b-e7cc-4155-814a-3e0632ad1d06
	I0122 12:11:15.576952   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:15.576952   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:15.576952   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:15.576952   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:15.576952   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:15 GMT
	I0122 12:11:15.578360   11512 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"443"},"items":[{"metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"438","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54128 chars]
	I0122 12:11:15.580884   11512 system_pods.go:59] 8 kube-system pods found
	I0122 12:11:15.580884   11512 system_pods.go:61] "coredns-5dd5756b68-g9wk9" [d5b665f2-ead9-43b8-8bf6-73c2edf81f8f] Running
	I0122 12:11:15.580884   11512 system_pods.go:61] "etcd-multinode-484200" [097d4c37-0438-4d04-9327-a80a8f986677] Running
	I0122 12:11:15.580884   11512 system_pods.go:61] "kindnet-f9bw4" [8ee26eb1-daae-41a6-8d1c-e7f3e821ebb5] Running
	I0122 12:11:15.580884   11512 system_pods.go:61] "kube-apiserver-multinode-484200" [c10bf08a-e8a1-4355-8369-b54d7c5e6b68] Running
	I0122 12:11:15.580884   11512 system_pods.go:61] "kube-controller-manager-multinode-484200" [2f20b1d1-9ba0-417b-9060-a0e50a1522ad] Running
	I0122 12:11:15.580884   11512 system_pods.go:61] "kube-proxy-m94rg" [603ddbfd-0357-4e57-a38a-5054e02f81ee] Running
	I0122 12:11:15.580884   11512 system_pods.go:61] "kube-scheduler-multinode-484200" [d2f95119-a5d5-4331-880e-bf6add1d87b6] Running
	I0122 12:11:15.580884   11512 system_pods.go:61] "storage-provisioner" [30187bc8-2b16-4ca8-944b-c71bfb25e36b] Running
	I0122 12:11:15.580884   11512 system_pods.go:74] duration metric: took 164.2085ms to wait for pod list to return data ...
	I0122 12:11:15.580884   11512 default_sa.go:34] waiting for default service account to be created ...
	I0122 12:11:15.769772   11512 request.go:629] Waited for 188.8868ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.177.163:8443/api/v1/namespaces/default/serviceaccounts
	I0122 12:11:15.769772   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/namespaces/default/serviceaccounts
	I0122 12:11:15.769772   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:15.769772   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:15.769772   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:15.773571   11512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:11:15.774523   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:15.774573   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:15 GMT
	I0122 12:11:15.774573   11512 round_trippers.go:580]     Audit-Id: 2411f180-3bf2-4378-9529-b1fab111e3b2
	I0122 12:11:15.774573   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:15.774573   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:15.774573   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:15.774573   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:15.774573   11512 round_trippers.go:580]     Content-Length: 261
	I0122 12:11:15.774573   11512 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"443"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"4124c850-901f-4684-9b66-62ca4c3085b4","resourceVersion":"367","creationTimestamp":"2024-01-22T12:10:59Z"}}]}
	I0122 12:11:15.774573   11512 default_sa.go:45] found service account: "default"
	I0122 12:11:15.775152   11512 default_sa.go:55] duration metric: took 194.2677ms for default service account to be created ...
	I0122 12:11:15.775152   11512 system_pods.go:116] waiting for k8s-apps to be running ...
	I0122 12:11:15.958249   11512 request.go:629] Waited for 182.8702ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.177.163:8443/api/v1/namespaces/kube-system/pods
	I0122 12:11:15.958696   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/namespaces/kube-system/pods
	I0122 12:11:15.958696   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:15.958696   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:15.958696   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:15.963360   11512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:11:15.963763   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:15.963905   11512 round_trippers.go:580]     Audit-Id: 38b67b57-13be-4931-bc41-6b2833d9a560
	I0122 12:11:15.963905   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:15.963905   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:15.964025   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:15.964135   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:15.964230   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:15 GMT
	I0122 12:11:15.965201   11512 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"444"},"items":[{"metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"438","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54128 chars]
	I0122 12:11:15.968010   11512 system_pods.go:86] 8 kube-system pods found
	I0122 12:11:15.968010   11512 system_pods.go:89] "coredns-5dd5756b68-g9wk9" [d5b665f2-ead9-43b8-8bf6-73c2edf81f8f] Running
	I0122 12:11:15.968010   11512 system_pods.go:89] "etcd-multinode-484200" [097d4c37-0438-4d04-9327-a80a8f986677] Running
	I0122 12:11:15.968010   11512 system_pods.go:89] "kindnet-f9bw4" [8ee26eb1-daae-41a6-8d1c-e7f3e821ebb5] Running
	I0122 12:11:15.968010   11512 system_pods.go:89] "kube-apiserver-multinode-484200" [c10bf08a-e8a1-4355-8369-b54d7c5e6b68] Running
	I0122 12:11:15.968010   11512 system_pods.go:89] "kube-controller-manager-multinode-484200" [2f20b1d1-9ba0-417b-9060-a0e50a1522ad] Running
	I0122 12:11:15.968010   11512 system_pods.go:89] "kube-proxy-m94rg" [603ddbfd-0357-4e57-a38a-5054e02f81ee] Running
	I0122 12:11:15.968010   11512 system_pods.go:89] "kube-scheduler-multinode-484200" [d2f95119-a5d5-4331-880e-bf6add1d87b6] Running
	I0122 12:11:15.968010   11512 system_pods.go:89] "storage-provisioner" [30187bc8-2b16-4ca8-944b-c71bfb25e36b] Running
	I0122 12:11:15.968010   11512 system_pods.go:126] duration metric: took 192.8566ms to wait for k8s-apps to be running ...
	I0122 12:11:15.968010   11512 system_svc.go:44] waiting for kubelet service to be running ....
	I0122 12:11:15.983821   11512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 12:11:16.004916   11512 system_svc.go:56] duration metric: took 36.9063ms WaitForService to wait for kubelet.
	I0122 12:11:16.005059   11512 kubeadm.go:581] duration metric: took 15.2628872s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0122 12:11:16.005059   11512 node_conditions.go:102] verifying NodePressure condition ...
	I0122 12:11:16.162150   11512 request.go:629] Waited for 156.6792ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.177.163:8443/api/v1/nodes
	I0122 12:11:16.162266   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes
	I0122 12:11:16.162266   11512 round_trippers.go:469] Request Headers:
	I0122 12:11:16.162266   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:11:16.162612   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:11:16.167134   11512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:11:16.167134   11512 round_trippers.go:577] Response Headers:
	I0122 12:11:16.167134   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:11:16.167134   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:11:16.167487   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:11:16.167487   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:11:16 GMT
	I0122 12:11:16.167487   11512 round_trippers.go:580]     Audit-Id: ce17f243-039e-458a-b744-801be421d0b6
	I0122 12:11:16.167487   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:11:16.167684   11512 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"444"},"items":[{"metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"421","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4836 chars]
	I0122 12:11:16.168324   11512 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0122 12:11:16.168396   11512 node_conditions.go:123] node cpu capacity is 2
	I0122 12:11:16.168396   11512 node_conditions.go:105] duration metric: took 163.3359ms to run NodePressure ...
	I0122 12:11:16.168396   11512 start.go:228] waiting for startup goroutines ...
	I0122 12:11:16.168396   11512 start.go:233] waiting for cluster config update ...
	I0122 12:11:16.168478   11512 start.go:242] writing updated cluster config ...
	I0122 12:11:16.170326   11512 out.go:177] 
	I0122 12:11:16.181042   11512 config.go:182] Loaded profile config "multinode-484200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 12:11:16.181042   11512 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\config.json ...
	I0122 12:11:16.184399   11512 out.go:177] * Starting worker node multinode-484200-m02 in cluster multinode-484200
	I0122 12:11:16.185501   11512 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0122 12:11:16.185501   11512 cache.go:56] Caching tarball of preloaded images
	I0122 12:11:16.186115   11512 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0122 12:11:16.186115   11512 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0122 12:11:16.186845   11512 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\config.json ...
	I0122 12:11:16.195990   11512 start.go:365] acquiring machines lock for multinode-484200-m02: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0122 12:11:16.196631   11512 start.go:369] acquired machines lock for "multinode-484200-m02" in 573.1µs
	I0122 12:11:16.196704   11512 start.go:93] Provisioning new machine with config: &{Name:multinode-484200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-484200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.23.177.163 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0122 12:11:16.196704   11512 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0122 12:11:16.197276   11512 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0122 12:11:16.197973   11512 start.go:159] libmachine.API.Create for "multinode-484200" (driver="hyperv")
	I0122 12:11:16.197973   11512 client.go:168] LocalClient.Create starting
	I0122 12:11:16.197973   11512 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0122 12:11:16.198671   11512 main.go:141] libmachine: Decoding PEM data...
	I0122 12:11:16.198747   11512 main.go:141] libmachine: Parsing certificate...
	I0122 12:11:16.198747   11512 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0122 12:11:16.198747   11512 main.go:141] libmachine: Decoding PEM data...
	I0122 12:11:16.198747   11512 main.go:141] libmachine: Parsing certificate...
	I0122 12:11:16.198747   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0122 12:11:18.105141   11512 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0122 12:11:18.105320   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:11:18.105320   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0122 12:11:19.888057   11512 main.go:141] libmachine: [stdout =====>] : False
	
	I0122 12:11:19.888353   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:11:19.888353   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0122 12:11:21.433270   11512 main.go:141] libmachine: [stdout =====>] : True
	
	I0122 12:11:21.433550   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:11:21.433550   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0122 12:11:25.159230   11512 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0122 12:11:25.159503   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:11:25.161781   11512 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0122 12:11:25.640041   11512 main.go:141] libmachine: Creating SSH key...
	I0122 12:11:25.750003   11512 main.go:141] libmachine: Creating VM...
	I0122 12:11:25.750003   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0122 12:11:28.752772   11512 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0122 12:11:28.752965   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:11:28.753115   11512 main.go:141] libmachine: Using switch "Default Switch"
	I0122 12:11:28.753169   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0122 12:11:30.539607   11512 main.go:141] libmachine: [stdout =====>] : True
	
	I0122 12:11:30.539607   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:11:30.539717   11512 main.go:141] libmachine: Creating VHD
	I0122 12:11:30.539717   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0122 12:11:34.352858   11512 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 83973BF0-4DE8-418F-BECF-EC69192CDA7F
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0122 12:11:34.352858   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:11:34.352858   11512 main.go:141] libmachine: Writing magic tar header
	I0122 12:11:34.352858   11512 main.go:141] libmachine: Writing SSH key tar header
	I0122 12:11:34.362878   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0122 12:11:37.572787   11512 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:11:37.572870   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:11:37.572870   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200-m02\disk.vhd' -SizeBytes 20000MB
	I0122 12:11:40.118137   11512 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:11:40.118692   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:11:40.118805   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-484200-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0122 12:11:43.707450   11512 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-484200-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0122 12:11:43.707506   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:11:43.707506   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-484200-m02 -DynamicMemoryEnabled $false
	I0122 12:11:45.947808   11512 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:11:45.947808   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:11:45.947808   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-484200-m02 -Count 2
	I0122 12:11:48.128731   11512 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:11:48.128804   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:11:48.128866   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-484200-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200-m02\boot2docker.iso'
	I0122 12:11:50.734730   11512 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:11:50.734824   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:11:50.734824   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-484200-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200-m02\disk.vhd'
	I0122 12:11:53.393007   11512 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:11:53.393280   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:11:53.393280   11512 main.go:141] libmachine: Starting VM...
	I0122 12:11:53.393346   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-484200-m02
	I0122 12:11:56.316242   11512 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:11:56.316242   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:11:56.316525   11512 main.go:141] libmachine: Waiting for host to start...
	I0122 12:11:56.316642   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:11:58.642163   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:11:58.642163   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:11:58.642265   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:12:01.207018   11512 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:12:01.207168   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:12:02.212658   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:12:04.460615   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:12:04.460891   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:12:04.460891   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:12:07.007558   11512 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:12:07.007662   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:12:08.014064   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:12:10.228767   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:12:10.228767   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:12:10.228767   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:12:12.793125   11512 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:12:12.793125   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:12:13.800065   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:12:16.061568   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:12:16.061568   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:12:16.061700   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:12:18.634205   11512 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:12:18.634205   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:12:19.648924   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:12:21.902528   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:12:21.902528   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:12:21.902528   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:12:24.518023   11512 main.go:141] libmachine: [stdout =====>] : 172.23.185.74
	
	I0122 12:12:24.518023   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:12:24.518023   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:12:26.704871   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:12:26.704871   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:12:26.704954   11512 machine.go:88] provisioning docker machine ...
	I0122 12:12:26.704954   11512 buildroot.go:166] provisioning hostname "multinode-484200-m02"
	I0122 12:12:26.705102   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:12:28.902881   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:12:28.902881   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:12:28.903020   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:12:31.433421   11512 main.go:141] libmachine: [stdout =====>] : 172.23.185.74
	
	I0122 12:12:31.433421   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:12:31.439285   11512 main.go:141] libmachine: Using SSH client type: native
	I0122 12:12:31.447592   11512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.185.74 22 <nil> <nil>}
	I0122 12:12:31.447592   11512 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-484200-m02 && echo "multinode-484200-m02" | sudo tee /etc/hostname
	I0122 12:12:31.614902   11512 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-484200-m02
	
	I0122 12:12:31.615003   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:12:33.806543   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:12:33.806543   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:12:33.806706   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:12:36.348768   11512 main.go:141] libmachine: [stdout =====>] : 172.23.185.74
	
	I0122 12:12:36.348949   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:12:36.354862   11512 main.go:141] libmachine: Using SSH client type: native
	I0122 12:12:36.355594   11512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.185.74 22 <nil> <nil>}
	I0122 12:12:36.355594   11512 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-484200-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-484200-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-484200-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0122 12:12:36.528285   11512 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 12:12:36.528285   11512 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0122 12:12:36.528285   11512 buildroot.go:174] setting up certificates
	I0122 12:12:36.528445   11512 provision.go:83] configureAuth start
	I0122 12:12:36.528514   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:12:38.709651   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:12:38.709730   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:12:38.709730   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:12:41.301345   11512 main.go:141] libmachine: [stdout =====>] : 172.23.185.74
	
	I0122 12:12:41.301529   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:12:41.301751   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:12:43.452399   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:12:43.452612   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:12:43.452612   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:12:46.034365   11512 main.go:141] libmachine: [stdout =====>] : 172.23.185.74
	
	I0122 12:12:46.034676   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:12:46.034676   11512 provision.go:138] copyHostCerts
	I0122 12:12:46.034934   11512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0122 12:12:46.035468   11512 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0122 12:12:46.035468   11512 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0122 12:12:46.036188   11512 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0122 12:12:46.037799   11512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0122 12:12:46.038300   11512 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0122 12:12:46.038300   11512 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0122 12:12:46.038819   11512 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0122 12:12:46.040224   11512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0122 12:12:46.040657   11512 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0122 12:12:46.040657   11512 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0122 12:12:46.041135   11512 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0122 12:12:46.042028   11512 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-484200-m02 san=[172.23.185.74 172.23.185.74 localhost 127.0.0.1 minikube multinode-484200-m02]
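The server certificate above is generated in Go, signed by the minikube CA, with the listed SANs. An illustrative openssl equivalent (self-signed here for brevity; file locations are stand-ins, not minikube's paths):

```shell
# Illustrative openssl version of the SAN cert generated above (minikube does
# this in Go and signs with its own CA; this sketch self-signs for brevity).
D=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$D/server-key.pem" -out "$D/server.pem" \
  -subj "/O=jenkins.multinode-484200-m02" \
  -addext "subjectAltName=IP:172.23.185.74,DNS:localhost,IP:127.0.0.1,DNS:minikube,DNS:multinode-484200-m02"
# Verify the SANs made it into the certificate (requires OpenSSL 1.1.1+):
openssl x509 -in "$D/server.pem" -noout -ext subjectAltName
rm -rf "$D"
```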
	I0122 12:12:46.268010   11512 provision.go:172] copyRemoteCerts
	I0122 12:12:46.280101   11512 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0122 12:12:46.280101   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:12:48.441064   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:12:48.441286   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:12:48.441421   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:12:51.047319   11512 main.go:141] libmachine: [stdout =====>] : 172.23.185.74
	
	I0122 12:12:51.047772   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:12:51.047854   11512 sshutil.go:53] new ssh client: &{IP:172.23.185.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200-m02\id_rsa Username:docker}
	I0122 12:12:51.172220   11512 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8919841s)
	I0122 12:12:51.172287   11512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0122 12:12:51.172803   11512 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0122 12:12:51.214332   11512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0122 12:12:51.214896   11512 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I0122 12:12:51.261555   11512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0122 12:12:51.262122   11512 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0122 12:12:51.300395   11512 provision.go:86] duration metric: configureAuth took 14.7718439s
	I0122 12:12:51.300395   11512 buildroot.go:189] setting minikube options for container-runtime
	I0122 12:12:51.300993   11512 config.go:182] Loaded profile config "multinode-484200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 12:12:51.301050   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:12:53.482616   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:12:53.482695   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:12:53.482695   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:12:56.072807   11512 main.go:141] libmachine: [stdout =====>] : 172.23.185.74
	
	I0122 12:12:56.073296   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:12:56.079099   11512 main.go:141] libmachine: Using SSH client type: native
	I0122 12:12:56.079805   11512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.185.74 22 <nil> <nil>}
	I0122 12:12:56.079805   11512 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0122 12:12:56.237657   11512 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0122 12:12:56.237657   11512 buildroot.go:70] root file system type: tmpfs
	I0122 12:12:56.237657   11512 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0122 12:12:56.237657   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:12:58.413758   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:12:58.413758   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:12:58.413855   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:13:00.959927   11512 main.go:141] libmachine: [stdout =====>] : 172.23.185.74
	
	I0122 12:13:00.959927   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:13:00.968068   11512 main.go:141] libmachine: Using SSH client type: native
	I0122 12:13:00.968726   11512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.185.74 22 <nil> <nil>}
	I0122 12:13:00.969292   11512 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.23.177.163"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0122 12:13:01.144392   11512 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.23.177.163
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
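The comment block inside the generated unit explains the drop-in pattern: systemd rejects multiple `ExecStart=` lines for a non-oneshot service, so an empty `ExecStart=` must come first to clear any inherited command. The same pattern applies to any hand-written override (the path and flags below are illustrative, not from this run):

```ini
# /etc/systemd/system/docker.service.d/override.conf  (illustrative path)
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --your-flags-here
```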
	I0122 12:13:01.144392   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:13:03.284623   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:13:03.284804   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:13:03.284894   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:13:05.857065   11512 main.go:141] libmachine: [stdout =====>] : 172.23.185.74
	
	I0122 12:13:05.857065   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:13:05.863098   11512 main.go:141] libmachine: Using SSH client type: native
	I0122 12:13:05.863855   11512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.185.74 22 <nil> <nil>}
	I0122 12:13:05.863855   11512 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
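The `diff … || { mv …; restart; }` one-liner above installs the new unit and restarts docker only when the content actually differs; a missing target file also makes `diff` exit non-zero, which is the install path taken in this run. The pattern in isolation, on temp files:

```shell
# "Replace only if changed" pattern from the command above, on temp files.
D=$(mktemp -d)
echo "v1" > "$D/docker.service.new"
# First run: the target does not exist, diff exits non-zero, so we install.
diff -u "$D/docker.service" "$D/docker.service.new" 2>/dev/null \
  || mv "$D/docker.service.new" "$D/docker.service"
# Second run with identical content: diff exits 0, nothing is replaced.
echo "v1" > "$D/docker.service.new"
diff -u "$D/docker.service" "$D/docker.service.new" >/dev/null \
  || mv "$D/docker.service.new" "$D/docker.service"
cat "$D/docker.service"   # -> v1
rm -rf "$D"
```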
	I0122 12:13:06.872858   11512 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0122 12:13:06.873251   11512 machine.go:91] provisioned docker machine in 40.1680423s
	I0122 12:13:06.873251   11512 client.go:171] LocalClient.Create took 1m50.6747799s
	I0122 12:13:06.873411   11512 start.go:167] duration metric: libmachine.API.Create for "multinode-484200" took 1m50.6749397s
	I0122 12:13:06.873508   11512 start.go:300] post-start starting for "multinode-484200-m02" (driver="hyperv")
	I0122 12:13:06.873508   11512 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0122 12:13:06.886684   11512 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0122 12:13:06.886684   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:13:09.031181   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:13:09.031181   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:13:09.031181   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:13:11.621606   11512 main.go:141] libmachine: [stdout =====>] : 172.23.185.74
	
	I0122 12:13:11.621742   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:13:11.622412   11512 sshutil.go:53] new ssh client: &{IP:172.23.185.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200-m02\id_rsa Username:docker}
	I0122 12:13:11.729491   11512 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8427853s)
	I0122 12:13:11.749018   11512 ssh_runner.go:195] Run: cat /etc/os-release
	I0122 12:13:11.755657   11512 command_runner.go:130] > NAME=Buildroot
	I0122 12:13:11.755657   11512 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0122 12:13:11.755657   11512 command_runner.go:130] > ID=buildroot
	I0122 12:13:11.755657   11512 command_runner.go:130] > VERSION_ID=2021.02.12
	I0122 12:13:11.755657   11512 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0122 12:13:11.755657   11512 info.go:137] Remote host: Buildroot 2021.02.12
	I0122 12:13:11.755657   11512 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0122 12:13:11.756346   11512 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0122 12:13:11.757198   11512 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> 63962.pem in /etc/ssl/certs
	I0122 12:13:11.757198   11512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> /etc/ssl/certs/63962.pem
	I0122 12:13:11.771542   11512 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0122 12:13:11.786962   11512 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem --> /etc/ssl/certs/63962.pem (1708 bytes)
	I0122 12:13:11.822832   11512 start.go:303] post-start completed in 4.9493021s
	I0122 12:13:11.825797   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:13:13.963149   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:13:13.963233   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:13:13.963325   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:13:16.552832   11512 main.go:141] libmachine: [stdout =====>] : 172.23.185.74
	
	I0122 12:13:16.552968   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:13:16.553272   11512 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\config.json ...
	I0122 12:13:16.556121   11512 start.go:128] duration metric: createHost completed in 2m0.3587957s
	I0122 12:13:16.556198   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:13:18.676242   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:13:18.676422   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:13:18.676422   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:13:21.192333   11512 main.go:141] libmachine: [stdout =====>] : 172.23.185.74
	
	I0122 12:13:21.192429   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:13:21.198295   11512 main.go:141] libmachine: Using SSH client type: native
	I0122 12:13:21.199124   11512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.185.74 22 <nil> <nil>}
	I0122 12:13:21.199124   11512 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0122 12:13:21.355045   11512 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705925601.349016780
	
	I0122 12:13:21.355198   11512 fix.go:206] guest clock: 1705925601.349016780
	I0122 12:13:21.355255   11512 fix.go:219] Guest: 2024-01-22 12:13:21.34901678 +0000 UTC Remote: 2024-01-22 12:13:16.5561213 +0000 UTC m=+327.734805801 (delta=4.79289548s)
	I0122 12:13:21.355255   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:13:23.550217   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:13:23.550217   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:13:23.550370   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:13:26.093692   11512 main.go:141] libmachine: [stdout =====>] : 172.23.185.74
	
	I0122 12:13:26.093939   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:13:26.099486   11512 main.go:141] libmachine: Using SSH client type: native
	I0122 12:13:26.100256   11512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.185.74 22 <nil> <nil>}
	I0122 12:13:26.100256   11512 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1705925601
	I0122 12:13:26.269503   11512 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan 22 12:13:21 UTC 2024
	
	I0122 12:13:26.269698   11512 fix.go:226] clock set: Mon Jan 22 12:13:21 UTC 2024
	 (err=<nil>)
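The clock-sync step reads the guest clock as seconds.nanoseconds (the command was presumably `date +%s.%N`; the `%!s(MISSING)` in the log is a Go format-verb artifact, not what ran on the guest) and then sets it with `sudo date -s @<epoch>`. Rendering the logged epoch confirms the timestamp the guest echoed back:

```shell
# The epoch 1705925601 from the log, rendered the way the guest printed it
# (GNU date assumed):
date -u -d "@1705925601" +"%a %b %e %T UTC %Y"   # -> Mon Jan 22 12:13:21 UTC 2024
```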
	I0122 12:13:26.269698   11512 start.go:83] releasing machines lock for "multinode-484200-m02", held for 2m10.0724821s
	I0122 12:13:26.269956   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:13:28.414463   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:13:28.414559   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:13:28.414623   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:13:31.026212   11512 main.go:141] libmachine: [stdout =====>] : 172.23.185.74
	
	I0122 12:13:31.026212   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:13:31.026942   11512 out.go:177] * Found network options:
	I0122 12:13:31.028056   11512 out.go:177]   - NO_PROXY=172.23.177.163
	W0122 12:13:31.028499   11512 proxy.go:119] fail to check proxy env: Error ip not in block
	I0122 12:13:31.029269   11512 out.go:177]   - NO_PROXY=172.23.177.163
	W0122 12:13:31.030284   11512 proxy.go:119] fail to check proxy env: Error ip not in block
	W0122 12:13:31.031942   11512 proxy.go:119] fail to check proxy env: Error ip not in block
	I0122 12:13:31.033652   11512 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0122 12:13:31.034178   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:13:31.049766   11512 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0122 12:13:31.049766   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:13:33.286432   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:13:33.286628   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:13:33.286736   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:13:33.290651   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:13:33.290651   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:13:33.290651   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:13:35.955355   11512 main.go:141] libmachine: [stdout =====>] : 172.23.185.74
	
	I0122 12:13:35.955355   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:13:35.955355   11512 sshutil.go:53] new ssh client: &{IP:172.23.185.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200-m02\id_rsa Username:docker}
	I0122 12:13:35.986974   11512 main.go:141] libmachine: [stdout =====>] : 172.23.185.74
	
	I0122 12:13:35.988096   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:13:35.988309   11512 sshutil.go:53] new ssh client: &{IP:172.23.185.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200-m02\id_rsa Username:docker}
	I0122 12:13:36.071961   11512 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0122 12:13:36.072549   11512 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0226561s)
	W0122 12:13:36.072596   11512 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0122 12:13:36.086477   11512 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0122 12:13:36.158668   11512 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0122 12:13:36.158668   11512 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0122 12:13:36.158668   11512 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1249939s)
	I0122 12:13:36.158668   11512 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
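Bridge/podman CNI configs are disabled by renaming them with a `.mk_disabled` suffix rather than deleting them (the `%!p(MISSING)` in the logged command is again a Go logging artifact, presumably from a `-printf "%p, "` argument). The same `find` expression on a scratch directory (file names made up, except the one seen in the log):

```shell
# CNI-disable step from the log, on a scratch directory.
D=$(mktemp -d)
touch "$D/87-podman-bridge.conflist" "$D/10-flannel.conf"
find "$D" -maxdepth 1 -type f \( -name '*bridge*' -o -name '*podman*' \) \
  ! -name '*.mk_disabled' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
ls "$D"   # the podman-bridge config now carries the .mk_disabled suffix
rm -rf "$D"
```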
	I0122 12:13:36.158668   11512 start.go:475] detecting cgroup driver to use...
	I0122 12:13:36.158978   11512 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 12:13:36.187225   11512 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0122 12:13:36.200515   11512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0122 12:13:36.227757   11512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0122 12:13:36.245544   11512 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0122 12:13:36.258933   11512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0122 12:13:36.286138   11512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 12:13:36.314699   11512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0122 12:13:36.343794   11512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 12:13:36.372296   11512 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0122 12:13:36.401422   11512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
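The run of `sed -i` commands above rewrites `/etc/containerd/config.toml` in place: pause image, OOM-score restriction, cgroup driver, runtime v2 shim, and `conf_dir`. Two of those substitutions applied to a stub config (stub contents are made up):

```shell
# Two of the sed rewrites above, applied to a stub config.toml; the \1
# capture preserves the original indentation.
CFG=$(mktemp)
printf '    SystemdCgroup = true\n    sandbox_image = "k8s.gcr.io/pause:3.6"\n' > "$CFG"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$CFG"
cat "$CFG"
rm -f "$CFG"
```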
	I0122 12:13:36.430782   11512 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0122 12:13:36.445351   11512 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0122 12:13:36.459055   11512 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0122 12:13:36.486780   11512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 12:13:36.651834   11512 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0122 12:13:36.680480   11512 start.go:475] detecting cgroup driver to use...
	I0122 12:13:36.694841   11512 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0122 12:13:36.714336   11512 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0122 12:13:36.714336   11512 command_runner.go:130] > [Unit]
	I0122 12:13:36.714336   11512 command_runner.go:130] > Description=Docker Application Container Engine
	I0122 12:13:36.714336   11512 command_runner.go:130] > Documentation=https://docs.docker.com
	I0122 12:13:36.714439   11512 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0122 12:13:36.714439   11512 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0122 12:13:36.714439   11512 command_runner.go:130] > StartLimitBurst=3
	I0122 12:13:36.714439   11512 command_runner.go:130] > StartLimitIntervalSec=60
	I0122 12:13:36.714439   11512 command_runner.go:130] > [Service]
	I0122 12:13:36.714512   11512 command_runner.go:130] > Type=notify
	I0122 12:13:36.714512   11512 command_runner.go:130] > Restart=on-failure
	I0122 12:13:36.714512   11512 command_runner.go:130] > Environment=NO_PROXY=172.23.177.163
	I0122 12:13:36.714590   11512 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0122 12:13:36.714708   11512 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0122 12:13:36.714708   11512 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0122 12:13:36.714774   11512 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0122 12:13:36.714774   11512 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0122 12:13:36.714824   11512 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0122 12:13:36.714858   11512 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0122 12:13:36.714858   11512 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0122 12:13:36.714858   11512 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0122 12:13:36.714958   11512 command_runner.go:130] > ExecStart=
	I0122 12:13:36.714958   11512 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0122 12:13:36.715043   11512 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0122 12:13:36.715095   11512 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0122 12:13:36.715144   11512 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0122 12:13:36.715144   11512 command_runner.go:130] > LimitNOFILE=infinity
	I0122 12:13:36.715144   11512 command_runner.go:130] > LimitNPROC=infinity
	I0122 12:13:36.715144   11512 command_runner.go:130] > LimitCORE=infinity
	I0122 12:13:36.715227   11512 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0122 12:13:36.715252   11512 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0122 12:13:36.715281   11512 command_runner.go:130] > TasksMax=infinity
	I0122 12:13:36.715281   11512 command_runner.go:130] > TimeoutStartSec=0
	I0122 12:13:36.715281   11512 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0122 12:13:36.715281   11512 command_runner.go:130] > Delegate=yes
	I0122 12:13:36.715281   11512 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0122 12:13:36.715281   11512 command_runner.go:130] > KillMode=process
	I0122 12:13:36.715281   11512 command_runner.go:130] > [Install]
	I0122 12:13:36.715281   11512 command_runner.go:130] > WantedBy=multi-user.target
	I0122 12:13:36.732305   11512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 12:13:36.764359   11512 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0122 12:13:36.807041   11512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 12:13:36.842393   11512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0122 12:13:36.877720   11512 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0122 12:13:36.933918   11512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0122 12:13:36.954321   11512 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 12:13:36.980295   11512 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0122 12:13:36.996856   11512 ssh_runner.go:195] Run: which cri-dockerd
	I0122 12:13:37.002317   11512 command_runner.go:130] > /usr/bin/cri-dockerd
	I0122 12:13:37.018136   11512 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0122 12:13:37.033630   11512 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0122 12:13:37.073110   11512 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0122 12:13:37.249750   11512 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0122 12:13:37.407885   11512 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0122 12:13:37.408084   11512 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0122 12:13:37.454158   11512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 12:13:37.625617   11512 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0122 12:13:39.115265   11512 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.4896419s)
	I0122 12:13:39.127792   11512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0122 12:13:39.164877   11512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0122 12:13:39.197065   11512 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0122 12:13:39.369094   11512 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0122 12:13:39.530819   11512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 12:13:39.703927   11512 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0122 12:13:39.744949   11512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0122 12:13:39.777930   11512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 12:13:39.961460   11512 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0122 12:13:40.080056   11512 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0122 12:13:40.096960   11512 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0122 12:13:40.110338   11512 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0122 12:13:40.110421   11512 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0122 12:13:40.110421   11512 command_runner.go:130] > Device: 16h/22d	Inode: 865         Links: 1
	I0122 12:13:40.110421   11512 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0122 12:13:40.110421   11512 command_runner.go:130] > Access: 2024-01-22 12:13:39.976994472 +0000
	I0122 12:13:40.110421   11512 command_runner.go:130] > Modify: 2024-01-22 12:13:39.976994472 +0000
	I0122 12:13:40.110421   11512 command_runner.go:130] > Change: 2024-01-22 12:13:39.980994472 +0000
	I0122 12:13:40.110421   11512 command_runner.go:130] >  Birth: -
	I0122 12:13:40.110557   11512 start.go:543] Will wait 60s for crictl version
	I0122 12:13:40.123816   11512 ssh_runner.go:195] Run: which crictl
	I0122 12:13:40.129934   11512 command_runner.go:130] > /usr/bin/crictl
	I0122 12:13:40.143192   11512 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0122 12:13:40.226416   11512 command_runner.go:130] > Version:  0.1.0
	I0122 12:13:40.226899   11512 command_runner.go:130] > RuntimeName:  docker
	I0122 12:13:40.226899   11512 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0122 12:13:40.226899   11512 command_runner.go:130] > RuntimeApiVersion:  v1
	I0122 12:13:40.226966   11512 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0122 12:13:40.238819   11512 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0122 12:13:40.276891   11512 command_runner.go:130] > 24.0.7
	I0122 12:13:40.288125   11512 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0122 12:13:40.321439   11512 command_runner.go:130] > 24.0.7
	I0122 12:13:40.323445   11512 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0122 12:13:40.324311   11512 out.go:177]   - env NO_PROXY=172.23.177.163
	I0122 12:13:40.325091   11512 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0122 12:13:40.330220   11512 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0122 12:13:40.330220   11512 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0122 12:13:40.330220   11512 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0122 12:13:40.330744   11512 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:49:f0:b7 Flags:up|broadcast|multicast|running}
	I0122 12:13:40.333546   11512 ip.go:210] interface addr: fe80::2ecf:c548:6b1f:89a8/64
	I0122 12:13:40.333546   11512 ip.go:210] interface addr: 172.23.176.1/20
	I0122 12:13:40.347078   11512 ssh_runner.go:195] Run: grep 172.23.176.1	host.minikube.internal$ /etc/hosts
	I0122 12:13:40.352677   11512 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.176.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 12:13:40.371125   11512 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200 for IP: 172.23.185.74
	I0122 12:13:40.371125   11512 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 12:13:40.371960   11512 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0122 12:13:40.372239   11512 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0122 12:13:40.372239   11512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0122 12:13:40.372239   11512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0122 12:13:40.372889   11512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0122 12:13:40.373156   11512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0122 12:13:40.373376   11512 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\6396.pem (1338 bytes)
	W0122 12:13:40.373376   11512 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\6396_empty.pem, impossibly tiny 0 bytes
	I0122 12:13:40.373952   11512 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0122 12:13:40.374374   11512 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0122 12:13:40.374593   11512 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0122 12:13:40.374929   11512 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0122 12:13:40.375159   11512 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem (1708 bytes)
	I0122 12:13:40.375159   11512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\6396.pem -> /usr/share/ca-certificates/6396.pem
	I0122 12:13:40.375159   11512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> /usr/share/ca-certificates/63962.pem
	I0122 12:13:40.375734   11512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:13:40.376110   11512 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0122 12:13:40.413784   11512 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0122 12:13:40.453726   11512 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0122 12:13:40.496816   11512 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0122 12:13:40.535413   11512 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\6396.pem --> /usr/share/ca-certificates/6396.pem (1338 bytes)
	I0122 12:13:40.577715   11512 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem --> /usr/share/ca-certificates/63962.pem (1708 bytes)
	I0122 12:13:40.621702   11512 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0122 12:13:40.677826   11512 ssh_runner.go:195] Run: openssl version
	I0122 12:13:40.688790   11512 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0122 12:13:40.702079   11512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0122 12:13:40.732651   11512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:13:40.739807   11512 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 22 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:13:40.740049   11512 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 22 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:13:40.750711   11512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:13:40.759177   11512 command_runner.go:130] > b5213941
	I0122 12:13:40.773367   11512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0122 12:13:40.805034   11512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6396.pem && ln -fs /usr/share/ca-certificates/6396.pem /etc/ssl/certs/6396.pem"
	I0122 12:13:40.834178   11512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6396.pem
	I0122 12:13:40.840981   11512 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 22 10:53 /usr/share/ca-certificates/6396.pem
	I0122 12:13:40.840981   11512 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 22 10:53 /usr/share/ca-certificates/6396.pem
	I0122 12:13:40.855996   11512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6396.pem
	I0122 12:13:40.863879   11512 command_runner.go:130] > 51391683
	I0122 12:13:40.882649   11512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6396.pem /etc/ssl/certs/51391683.0"
	I0122 12:13:40.912918   11512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/63962.pem && ln -fs /usr/share/ca-certificates/63962.pem /etc/ssl/certs/63962.pem"
	I0122 12:13:40.947312   11512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/63962.pem
	I0122 12:13:40.956858   11512 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 22 10:53 /usr/share/ca-certificates/63962.pem
	I0122 12:13:40.956858   11512 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 22 10:53 /usr/share/ca-certificates/63962.pem
	I0122 12:13:40.972532   11512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/63962.pem
	I0122 12:13:40.981690   11512 command_runner.go:130] > 3ec20f2e
	I0122 12:13:40.997238   11512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/63962.pem /etc/ssl/certs/3ec20f2e.0"
	I0122 12:13:41.028150   11512 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0122 12:13:41.034010   11512 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0122 12:13:41.034090   11512 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0122 12:13:41.045731   11512 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0122 12:13:41.085757   11512 command_runner.go:130] > cgroupfs
	I0122 12:13:41.085757   11512 cni.go:84] Creating CNI manager for ""
	I0122 12:13:41.085757   11512 cni.go:136] 2 nodes found, recommending kindnet
	I0122 12:13:41.085757   11512 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0122 12:13:41.085757   11512 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.23.185.74 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-484200 NodeName:multinode-484200-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.23.177.163"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.23.185.74 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0122 12:13:41.085757   11512 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.23.185.74
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-484200-m02"
	  kubeletExtraArgs:
	    node-ip: 172.23.185.74
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.23.177.163"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0122 12:13:41.085757   11512 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-484200-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.185.74
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-484200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0122 12:13:41.098769   11512 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0122 12:13:41.114283   11512 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	I0122 12:13:41.115278   11512 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0122 12:13:41.128953   11512 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0122 12:13:41.147830   11512 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet
	I0122 12:13:41.147830   11512 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm
	I0122 12:13:41.147830   11512 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl
	I0122 12:13:42.055209   11512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0122 12:13:42.068174   11512 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0122 12:13:42.075298   11512 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0122 12:13:42.075298   11512 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0122 12:13:42.076370   11512 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0122 12:13:42.480389   11512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0122 12:13:42.497370   11512 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0122 12:13:42.524030   11512 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0122 12:13:42.524868   11512 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0122 12:13:42.524868   11512 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0122 12:13:43.814942   11512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 12:13:43.837848   11512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0122 12:13:43.850571   11512 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0122 12:13:43.857375   11512 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0122 12:13:43.857375   11512 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0122 12:13:43.857375   11512 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0122 12:13:44.654590   11512 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0122 12:13:44.669846   11512 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0122 12:13:44.698233   11512 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0122 12:13:44.737986   11512 ssh_runner.go:195] Run: grep 172.23.177.163	control-plane.minikube.internal$ /etc/hosts
	I0122 12:13:44.743344   11512 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.177.163	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 12:13:44.764100   11512 host.go:66] Checking if "multinode-484200" exists ...
	I0122 12:13:44.764397   11512 config.go:182] Loaded profile config "multinode-484200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 12:13:44.764931   11512 start.go:304] JoinCluster: &{Name:multinode-484200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-484200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.23.177.163 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.185.74 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:tru
e ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0122 12:13:44.764981   11512 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0122 12:13:44.764981   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:13:46.960263   11512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:13:46.960500   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:13:46.960706   11512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:13:49.479768   11512 main.go:141] libmachine: [stdout =====>] : 172.23.177.163
	
	I0122 12:13:49.480023   11512 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:13:49.482905   11512 sshutil.go:53] new ssh client: &{IP:172.23.177.163 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200\id_rsa Username:docker}
	I0122 12:13:49.681591   11512 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token mvwql4.4kcszj6g0dtg0ayd --discovery-token-ca-cert-hash sha256:d9e856518777d1a58dcd52dda4caee52e72283bad61f5aa3ed0978c1365deb23 
	I0122 12:13:49.681792   11512 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.9167882s)
	I0122 12:13:49.681825   11512 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.23.185.74 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0122 12:13:49.681963   11512 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mvwql4.4kcszj6g0dtg0ayd --discovery-token-ca-cert-hash sha256:d9e856518777d1a58dcd52dda4caee52e72283bad61f5aa3ed0978c1365deb23 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-484200-m02"
	I0122 12:13:49.738553   11512 command_runner.go:130] ! W0122 12:13:49.732758    1353 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0122 12:13:49.909243   11512 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0122 12:13:52.691257   11512 command_runner.go:130] > [preflight] Running pre-flight checks
	I0122 12:13:52.691396   11512 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0122 12:13:52.691396   11512 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0122 12:13:52.691396   11512 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0122 12:13:52.691463   11512 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0122 12:13:52.691463   11512 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0122 12:13:52.691496   11512 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0122 12:13:52.691496   11512 command_runner.go:130] > This node has joined the cluster:
	I0122 12:13:52.691570   11512 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0122 12:13:52.691570   11512 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0122 12:13:52.691619   11512 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0122 12:13:52.691700   11512 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mvwql4.4kcszj6g0dtg0ayd --discovery-token-ca-cert-hash sha256:d9e856518777d1a58dcd52dda4caee52e72283bad61f5aa3ed0978c1365deb23 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-484200-m02": (3.0096417s)
	I0122 12:13:52.691818   11512 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0122 12:13:52.864280   11512 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0122 12:13:53.040052   11512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=02088786cd935ebc1444d13e603368558b96d0e3 minikube.k8s.io/name=multinode-484200 minikube.k8s.io/updated_at=2024_01_22T12_13_53_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 12:13:53.165766   11512 command_runner.go:130] > node/multinode-484200-m02 labeled
	I0122 12:13:53.166751   11512 start.go:306] JoinCluster complete in 8.4017826s
	I0122 12:13:53.166751   11512 cni.go:84] Creating CNI manager for ""
	I0122 12:13:53.166751   11512 cni.go:136] 2 nodes found, recommending kindnet
	I0122 12:13:53.179747   11512 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0122 12:13:53.188674   11512 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0122 12:13:53.188674   11512 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0122 12:13:53.188674   11512 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0122 12:13:53.188674   11512 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0122 12:13:53.188780   11512 command_runner.go:130] > Access: 2024-01-22 12:08:58.504080900 +0000
	I0122 12:13:53.188780   11512 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0122 12:13:53.188833   11512 command_runner.go:130] > Change: 2024-01-22 12:08:50.118000000 +0000
	I0122 12:13:53.188862   11512 command_runner.go:130] >  Birth: -
	I0122 12:13:53.189041   11512 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0122 12:13:53.189157   11512 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0122 12:13:53.238871   11512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0122 12:13:53.626877   11512 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0122 12:13:53.626877   11512 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0122 12:13:53.626877   11512 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0122 12:13:53.626877   11512 command_runner.go:130] > daemonset.apps/kindnet configured
	I0122 12:13:53.627894   11512 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 12:13:53.628872   11512 kapi.go:59] client config for multinode-484200: &rest.Config{Host:"https://172.23.177.163:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x184a600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0122 12:13:53.629871   11512 round_trippers.go:463] GET https://172.23.177.163:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0122 12:13:53.629871   11512 round_trippers.go:469] Request Headers:
	I0122 12:13:53.629871   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:13:53.629871   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:13:53.644632   11512 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0122 12:13:53.644910   11512 round_trippers.go:577] Response Headers:
	I0122 12:13:53.644910   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:13:53 GMT
	I0122 12:13:53.645000   11512 round_trippers.go:580]     Audit-Id: 4ec6bc0f-36a0-4f53-b9d3-b51c55a5ba3f
	I0122 12:13:53.645035   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:13:53.645035   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:13:53.645035   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:13:53.645035   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:13:53.645118   11512 round_trippers.go:580]     Content-Length: 291
	I0122 12:13:53.645182   11512 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"98df7e30-2026-4507-ad8d-589d82e9b1ba","resourceVersion":"442","creationTimestamp":"2024-01-22T12:10:46Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0122 12:13:53.645182   11512 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-484200" context rescaled to 1 replicas
	I0122 12:13:53.645182   11512 start.go:223] Will wait 6m0s for node &{Name:m02 IP:172.23.185.74 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0122 12:13:53.646343   11512 out.go:177] * Verifying Kubernetes components...
	I0122 12:13:53.660805   11512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 12:13:53.681943   11512 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 12:13:53.682746   11512 kapi.go:59] client config for multinode-484200: &rest.Config{Host:"https://172.23.177.163:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x184a600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0122 12:13:53.683811   11512 node_ready.go:35] waiting up to 6m0s for node "multinode-484200-m02" to be "Ready" ...
	I0122 12:13:53.683970   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:13:53.683970   11512 round_trippers.go:469] Request Headers:
	I0122 12:13:53.684024   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:13:53.684024   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:13:53.687398   11512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:13:53.687398   11512 round_trippers.go:577] Response Headers:
	I0122 12:13:53.687398   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:13:53.687398   11512 round_trippers.go:580]     Content-Length: 4036
	I0122 12:13:53.687398   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:13:53 GMT
	I0122 12:13:53.687398   11512 round_trippers.go:580]     Audit-Id: 359511ad-a52f-4e5f-ac77-f3305d7ea0d0
	I0122 12:13:53.687506   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:13:53.687506   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:13:53.687506   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:13:53.687596   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"596","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_13_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec" [truncated 3012 chars]
	I0122 12:13:54.190517   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:13:54.190586   11512 round_trippers.go:469] Request Headers:
	I0122 12:13:54.190586   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:13:54.190586   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:13:54.195161   11512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:13:54.195193   11512 round_trippers.go:577] Response Headers:
	I0122 12:13:54.195193   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:13:54.195193   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:13:54.195193   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:13:54.195193   11512 round_trippers.go:580]     Content-Length: 4036
	I0122 12:13:54.195193   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:13:54 GMT
	I0122 12:13:54.195338   11512 round_trippers.go:580]     Audit-Id: 693c37c2-f4e3-4729-85b6-7f8c12a9495b
	I0122 12:13:54.195338   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:13:54.195436   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"596","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_13_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec" [truncated 3012 chars]
	I0122 12:13:54.690553   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:13:54.690553   11512 round_trippers.go:469] Request Headers:
	I0122 12:13:54.690553   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:13:54.690553   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:13:54.695183   11512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:13:54.695183   11512 round_trippers.go:577] Response Headers:
	I0122 12:13:54.695251   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:13:54.695251   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:13:54.695251   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:13:54.695251   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:13:54 GMT
	I0122 12:13:54.695320   11512 round_trippers.go:580]     Audit-Id: c613f9da-4578-43b9-8a14-0c6192d470aa
	I0122 12:13:54.695320   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:13:54.695585   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"598","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_13_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I0122 12:13:55.193775   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:13:55.193775   11512 round_trippers.go:469] Request Headers:
	I0122 12:13:55.193863   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:13:55.193863   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:13:55.198242   11512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:13:55.198485   11512 round_trippers.go:577] Response Headers:
	I0122 12:13:55.198485   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:13:55 GMT
	I0122 12:13:55.198485   11512 round_trippers.go:580]     Audit-Id: a2b57b7c-2309-45b9-98b8-182c9865b4ac
	I0122 12:13:55.198485   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:13:55.198485   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:13:55.198574   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:13:55.198574   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:13:55.198843   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"598","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_13_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I0122 12:13:55.698477   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:13:55.698477   11512 round_trippers.go:469] Request Headers:
	I0122 12:13:55.698477   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:13:55.698477   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:13:55.702137   11512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:13:55.702884   11512 round_trippers.go:577] Response Headers:
	I0122 12:13:55.702884   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:13:55.702884   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:13:55 GMT
	I0122 12:13:55.702884   11512 round_trippers.go:580]     Audit-Id: a9cd1d93-24b0-486d-bbd2-658c5fff5bab
	I0122 12:13:55.702884   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:13:55.702884   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:13:55.702884   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:13:55.702884   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"598","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_13_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I0122 12:13:55.703842   11512 node_ready.go:58] node "multinode-484200-m02" has status "Ready":"False"
	I0122 12:13:56.186210   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:13:56.186210   11512 round_trippers.go:469] Request Headers:
	I0122 12:13:56.186301   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:13:56.186301   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:13:56.189702   11512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:13:56.189702   11512 round_trippers.go:577] Response Headers:
	I0122 12:13:56.189702   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:13:56.189702   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:13:56.189702   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:13:56 GMT
	I0122 12:13:56.189702   11512 round_trippers.go:580]     Audit-Id: d928db0d-74b5-4aee-adb1-fc07480b34e5
	I0122 12:13:56.189702   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:13:56.189702   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:13:56.190692   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"598","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_13_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I0122 12:13:56.689654   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:13:56.689654   11512 round_trippers.go:469] Request Headers:
	I0122 12:13:56.689654   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:13:56.689654   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:13:56.694178   11512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:13:56.694178   11512 round_trippers.go:577] Response Headers:
	I0122 12:13:56.694178   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:13:56.694178   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:13:56.694178   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:13:56.694397   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:13:56.694397   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:13:56 GMT
	I0122 12:13:56.694397   11512 round_trippers.go:580]     Audit-Id: ee030c8d-1940-48f4-ac9b-dcecf95a4291
	I0122 12:13:56.695007   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"598","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_13_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I0122 12:13:57.190590   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:13:57.190698   11512 round_trippers.go:469] Request Headers:
	I0122 12:13:57.190698   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:13:57.190698   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:13:57.195102   11512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:13:57.195260   11512 round_trippers.go:577] Response Headers:
	I0122 12:13:57.195260   11512 round_trippers.go:580]     Audit-Id: afbb3846-916f-40cf-9b11-cbb9525664b4
	I0122 12:13:57.195260   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:13:57.195260   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:13:57.195260   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:13:57.195260   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:13:57.195260   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:13:57 GMT
	I0122 12:13:57.195490   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"598","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_13_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I0122 12:13:57.684530   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:13:57.684530   11512 round_trippers.go:469] Request Headers:
	I0122 12:13:57.684530   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:13:57.684530   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:13:57.687945   11512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:13:57.687945   11512 round_trippers.go:577] Response Headers:
	I0122 12:13:57.687945   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:13:57.687945   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:13:57.687945   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:13:57.687945   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:13:57.687945   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:13:57 GMT
	I0122 12:13:57.687945   11512 round_trippers.go:580]     Audit-Id: 7c8930f5-39f5-4ace-9c4b-85879cb48fb8
	I0122 12:13:57.688874   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"598","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_13_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I0122 12:13:58.193377   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:13:58.193457   11512 round_trippers.go:469] Request Headers:
	I0122 12:13:58.193457   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:13:58.193457   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:13:58.197332   11512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:13:58.197332   11512 round_trippers.go:577] Response Headers:
	I0122 12:13:58.197332   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:13:58.197465   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:13:58.197465   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:13:58.197465   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:13:58.197465   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:13:58 GMT
	I0122 12:13:58.197573   11512 round_trippers.go:580]     Audit-Id: 8032bab8-4eec-4d5e-b768-f3da59dfd3ee
	I0122 12:13:58.197872   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"598","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_13_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I0122 12:13:58.198368   11512 node_ready.go:58] node "multinode-484200-m02" has status "Ready":"False"
	I0122 12:13:58.685058   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:13:58.685058   11512 round_trippers.go:469] Request Headers:
	I0122 12:13:58.685058   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:13:58.685058   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:13:58.689140   11512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:13:58.689639   11512 round_trippers.go:577] Response Headers:
	I0122 12:13:58.689639   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:13:58.689639   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:13:58.689639   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:13:58 GMT
	I0122 12:13:58.689639   11512 round_trippers.go:580]     Audit-Id: 28fb4dd5-f37d-4156-98ec-ba21512ee655
	I0122 12:13:58.689639   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:13:58.689639   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:13:58.689875   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"598","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_13_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I0122 12:13:59.191986   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:13:59.192046   11512 round_trippers.go:469] Request Headers:
	I0122 12:13:59.192046   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:13:59.192106   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:13:59.195450   11512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:13:59.195450   11512 round_trippers.go:577] Response Headers:
	I0122 12:13:59.195832   11512 round_trippers.go:580]     Audit-Id: af1036cf-6fb3-429e-a496-551277c26099
	I0122 12:13:59.195832   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:13:59.195832   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:13:59.195894   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:13:59.195894   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:13:59.195894   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:13:59 GMT
	I0122 12:13:59.195894   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"598","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_13_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I0122 12:13:59.700430   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:13:59.700430   11512 round_trippers.go:469] Request Headers:
	I0122 12:13:59.700430   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:13:59.700430   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:13:59.705070   11512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:13:59.705223   11512 round_trippers.go:577] Response Headers:
	I0122 12:13:59.705223   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:13:59.705223   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:13:59.705223   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:13:59.705223   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:13:59 GMT
	I0122 12:13:59.705223   11512 round_trippers.go:580]     Audit-Id: d44d06d3-bce4-4116-9071-2b017044bf77
	I0122 12:13:59.705328   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:13:59.705549   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"598","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_13_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I0122 12:14:00.189604   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:14:00.189604   11512 round_trippers.go:469] Request Headers:
	I0122 12:14:00.189604   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:14:00.189604   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:14:00.194679   11512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:14:00.194679   11512 round_trippers.go:577] Response Headers:
	I0122 12:14:00.194679   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:14:00.195626   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:14:00.195626   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:14:00.195626   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:14:00 GMT
	I0122 12:14:00.195626   11512 round_trippers.go:580]     Audit-Id: ce9f47fa-98e4-4c05-9952-bdc4c2c5e480
	I0122 12:14:00.195626   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:14:00.195806   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"598","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_13_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I0122 12:14:00.697614   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:14:00.697672   11512 round_trippers.go:469] Request Headers:
	I0122 12:14:00.697672   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:14:00.697730   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:14:00.705929   11512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0122 12:14:00.705929   11512 round_trippers.go:577] Response Headers:
	I0122 12:14:00.705929   11512 round_trippers.go:580]     Audit-Id: 28f0b1f1-c397-4b58-88ab-50c86d4acf8d
	I0122 12:14:00.705929   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:14:00.705929   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:14:00.705929   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:14:00.705929   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:14:00.705929   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:14:00 GMT
	I0122 12:14:00.705929   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"598","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_13_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I0122 12:14:00.706914   11512 node_ready.go:58] node "multinode-484200-m02" has status "Ready":"False"
	I0122 12:14:01.188798   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:14:01.188798   11512 round_trippers.go:469] Request Headers:
	I0122 12:14:01.188798   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:14:01.188798   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:14:01.295097   11512 round_trippers.go:574] Response Status: 200 OK in 106 milliseconds
	I0122 12:14:01.295097   11512 round_trippers.go:577] Response Headers:
	I0122 12:14:01.295990   11512 round_trippers.go:580]     Audit-Id: 3a1f77f5-4381-49a8-b783-84ce954667ea
	I0122 12:14:01.295990   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:14:01.295990   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:14:01.296227   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:14:01.296227   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:14:01.296227   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:14:01 GMT
	I0122 12:14:01.296752   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"598","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_13_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I0122 12:14:01.690531   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:14:01.690626   11512 round_trippers.go:469] Request Headers:
	I0122 12:14:01.690626   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:14:01.690626   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:14:01.694961   11512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:14:01.695517   11512 round_trippers.go:577] Response Headers:
	I0122 12:14:01.695517   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:14:01.695517   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:14:01 GMT
	I0122 12:14:01.695517   11512 round_trippers.go:580]     Audit-Id: b23eb22e-b80f-47bf-a122-091b19bb460b
	I0122 12:14:01.695517   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:14:01.695517   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:14:01.695517   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:14:01.695517   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"598","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_13_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I0122 12:14:02.195387   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:14:02.195387   11512 round_trippers.go:469] Request Headers:
	I0122 12:14:02.195387   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:14:02.195387   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:14:02.199509   11512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:14:02.199509   11512 round_trippers.go:577] Response Headers:
	I0122 12:14:02.199509   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:14:02.199509   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:14:02 GMT
	I0122 12:14:02.199509   11512 round_trippers.go:580]     Audit-Id: f6392545-ca3c-44f6-8381-75fcc0c2da66
	I0122 12:14:02.199509   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:14:02.199509   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:14:02.199509   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:14:02.199710   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"598","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_13_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I0122 12:14:02.689605   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:14:02.689889   11512 round_trippers.go:469] Request Headers:
	I0122 12:14:02.689944   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:14:02.689944   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:14:02.699332   11512 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0122 12:14:02.699332   11512 round_trippers.go:577] Response Headers:
	I0122 12:14:02.699332   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:14:02.699332   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:14:02.699332   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:14:02 GMT
	I0122 12:14:02.699332   11512 round_trippers.go:580]     Audit-Id: 3323dbf4-dc12-4c2c-af26-6aa06eabf3b2
	I0122 12:14:02.699332   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:14:02.699332   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:14:02.700323   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"598","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_13_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I0122 12:14:03.198632   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:14:03.198632   11512 round_trippers.go:469] Request Headers:
	I0122 12:14:03.198632   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:14:03.198632   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:14:03.205001   11512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0122 12:14:03.205001   11512 round_trippers.go:577] Response Headers:
	I0122 12:14:03.205001   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:14:03.205001   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:14:03.205001   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:14:03 GMT
	I0122 12:14:03.205001   11512 round_trippers.go:580]     Audit-Id: 71887bd1-912d-4ba1-a653-820cf5306a3b
	I0122 12:14:03.205001   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:14:03.205001   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:14:03.205808   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"616","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_13_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I0122 12:14:03.205808   11512 node_ready.go:58] node "multinode-484200-m02" has status "Ready":"False"
	I0122 12:14:03.692435   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:14:03.692532   11512 round_trippers.go:469] Request Headers:
	I0122 12:14:03.692599   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:14:03.692599   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:14:03.695873   11512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:14:03.696649   11512 round_trippers.go:577] Response Headers:
	I0122 12:14:03.696649   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:14:03 GMT
	I0122 12:14:03.696649   11512 round_trippers.go:580]     Audit-Id: e7878355-b34b-4616-8a34-44963f65d445
	I0122 12:14:03.696717   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:14:03.696717   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:14:03.696717   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:14:03.696772   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:14:03.696826   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"616","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_13_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I0122 12:14:04.198415   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:14:04.198477   11512 round_trippers.go:469] Request Headers:
	I0122 12:14:04.198477   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:14:04.198477   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:14:04.204907   11512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0122 12:14:04.205229   11512 round_trippers.go:577] Response Headers:
	I0122 12:14:04.205229   11512 round_trippers.go:580]     Audit-Id: c577afd4-3c07-4d5b-b229-d7e1eef1a8a3
	I0122 12:14:04.205229   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:14:04.205229   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:14:04.205229   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:14:04.205229   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:14:04.205326   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:14:04 GMT
	I0122 12:14:04.205545   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"616","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_13_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I0122 12:14:04.690843   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:14:04.690998   11512 round_trippers.go:469] Request Headers:
	I0122 12:14:04.690998   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:14:04.690998   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:14:04.694910   11512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:14:04.694910   11512 round_trippers.go:577] Response Headers:
	I0122 12:14:04.694910   11512 round_trippers.go:580]     Audit-Id: 83be2329-82d9-462d-ab58-a09f5d7f5138
	I0122 12:14:04.694910   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:14:04.694910   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:14:04.694910   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:14:04.694910   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:14:04.694910   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:14:04 GMT
	I0122 12:14:04.694910   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"616","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_13_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I0122 12:14:05.200251   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:14:05.200367   11512 round_trippers.go:469] Request Headers:
	I0122 12:14:05.200367   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:14:05.200367   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:14:05.205917   11512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:14:05.205917   11512 round_trippers.go:577] Response Headers:
	I0122 12:14:05.205917   11512 round_trippers.go:580]     Audit-Id: dc7a1cc7-0433-4f2c-ae9a-899559f10d38
	I0122 12:14:05.206155   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:14:05.206155   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:14:05.206155   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:14:05.206155   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:14:05.206155   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:14:05 GMT
	I0122 12:14:05.206334   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"616","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_13_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I0122 12:14:05.207434   11512 node_ready.go:58] node "multinode-484200-m02" has status "Ready":"False"
	I0122 12:14:05.691584   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:14:05.691584   11512 round_trippers.go:469] Request Headers:
	I0122 12:14:05.691584   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:14:05.691584   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:14:05.697383   11512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:14:05.697383   11512 round_trippers.go:577] Response Headers:
	I0122 12:14:05.697383   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:14:05 GMT
	I0122 12:14:05.697383   11512 round_trippers.go:580]     Audit-Id: 5108f9ec-2968-4177-be23-291313d3d586
	I0122 12:14:05.697383   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:14:05.697383   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:14:05.697383   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:14:05.697513   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:14:05.697819   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"616","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_13_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I0122 12:14:06.197005   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:14:06.197149   11512 round_trippers.go:469] Request Headers:
	I0122 12:14:06.197149   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:14:06.197149   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:14:06.201912   11512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:14:06.201976   11512 round_trippers.go:577] Response Headers:
	I0122 12:14:06.201976   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:14:06.201976   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:14:06.202072   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:14:06.202072   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:14:06.202072   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:14:06 GMT
	I0122 12:14:06.202072   11512 round_trippers.go:580]     Audit-Id: 75fcfbb2-b9c8-4fc0-a32b-7e9b05fe8264
	I0122 12:14:06.202469   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"616","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_13_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I0122 12:14:06.688126   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:14:06.688176   11512 round_trippers.go:469] Request Headers:
	I0122 12:14:06.688176   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:14:06.688176   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:14:06.695477   11512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0122 12:14:06.695732   11512 round_trippers.go:577] Response Headers:
	I0122 12:14:06.695732   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:14:06.695732   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:14:06.695732   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:14:06.695732   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:14:06.695732   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:14:06 GMT
	I0122 12:14:06.695809   11512 round_trippers.go:580]     Audit-Id: 281c4127-c6a1-4b4a-b5d8-f639a269463e
	I0122 12:14:06.695809   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"616","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_13_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I0122 12:14:07.190569   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:14:07.190569   11512 round_trippers.go:469] Request Headers:
	I0122 12:14:07.190569   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:14:07.190569   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:14:07.195525   11512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:14:07.195525   11512 round_trippers.go:577] Response Headers:
	I0122 12:14:07.195683   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:14:07 GMT
	I0122 12:14:07.195683   11512 round_trippers.go:580]     Audit-Id: 11795717-ca34-4339-b329-61e36d0c9ba8
	I0122 12:14:07.195683   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:14:07.195683   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:14:07.195683   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:14:07.195683   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:14:07.195933   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"616","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_13_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I0122 12:14:07.693289   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:14:07.693289   11512 round_trippers.go:469] Request Headers:
	I0122 12:14:07.693289   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:14:07.693289   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:14:07.697303   11512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:14:07.697622   11512 round_trippers.go:577] Response Headers:
	I0122 12:14:07.697622   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:14:07.697622   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:14:07 GMT
	I0122 12:14:07.697622   11512 round_trippers.go:580]     Audit-Id: 746bef6f-987c-4a4d-8335-d43005616900
	I0122 12:14:07.697695   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:14:07.697746   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:14:07.697746   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:14:07.698103   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"616","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_13_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I0122 12:14:07.698664   11512 node_ready.go:58] node "multinode-484200-m02" has status "Ready":"False"
	I0122 12:14:08.199878   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:14:08.199975   11512 round_trippers.go:469] Request Headers:
	I0122 12:14:08.199975   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:14:08.199975   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:14:08.204691   11512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:14:08.204967   11512 round_trippers.go:577] Response Headers:
	I0122 12:14:08.204967   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:14:08.204967   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:14:08.204967   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:14:08 GMT
	I0122 12:14:08.204967   11512 round_trippers.go:580]     Audit-Id: 913600a6-af00-4653-9228-405323db9f27
	I0122 12:14:08.204967   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:14:08.205047   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:14:08.205229   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"616","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_13_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I0122 12:14:08.686001   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:14:08.686070   11512 round_trippers.go:469] Request Headers:
	I0122 12:14:08.686070   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:14:08.686070   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:14:08.690286   11512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:14:08.690286   11512 round_trippers.go:577] Response Headers:
	I0122 12:14:08.690286   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:14:08.690286   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:14:08 GMT
	I0122 12:14:08.690286   11512 round_trippers.go:580]     Audit-Id: 8b275a34-898e-4de8-9c9d-766492746438
	I0122 12:14:08.690286   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:14:08.690286   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:14:08.690286   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:14:08.690549   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"616","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_13_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I0122 12:14:09.186281   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:14:09.186367   11512 round_trippers.go:469] Request Headers:
	I0122 12:14:09.186367   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:14:09.186528   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:14:09.190228   11512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:14:09.190395   11512 round_trippers.go:577] Response Headers:
	I0122 12:14:09.190395   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:14:09.190395   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:14:09.190395   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:14:09 GMT
	I0122 12:14:09.190610   11512 round_trippers.go:580]     Audit-Id: 9f494013-0966-4174-8ed8-230d4f2a305c
	I0122 12:14:09.190689   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:14:09.190716   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:14:09.191104   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"616","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_13_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I0122 12:14:09.700115   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:14:09.700179   11512 round_trippers.go:469] Request Headers:
	I0122 12:14:09.700179   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:14:09.700179   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:14:09.705580   11512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:14:09.705580   11512 round_trippers.go:577] Response Headers:
	I0122 12:14:09.705580   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:14:09.705975   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:14:09.705975   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:14:09.705975   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:14:09.705975   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:14:09 GMT
	I0122 12:14:09.705975   11512 round_trippers.go:580]     Audit-Id: 8e944b06-54cb-4ab5-8e7d-87b71462f32d
	I0122 12:14:09.706297   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"616","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_13_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I0122 12:14:09.706565   11512 node_ready.go:58] node "multinode-484200-m02" has status "Ready":"False"
	I0122 12:14:10.185182   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:14:10.185182   11512 round_trippers.go:469] Request Headers:
	I0122 12:14:10.185182   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:14:10.185182   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:14:10.191271   11512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0122 12:14:10.192185   11512 round_trippers.go:577] Response Headers:
	I0122 12:14:10.192185   11512 round_trippers.go:580]     Audit-Id: ad72d4a2-c040-4473-93fc-6b3faa38fccb
	I0122 12:14:10.192185   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:14:10.192185   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:14:10.192239   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:14:10.192239   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:14:10.192239   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:14:10 GMT
	I0122 12:14:10.192509   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"629","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_13_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3256 chars]
	I0122 12:14:10.192669   11512 node_ready.go:49] node "multinode-484200-m02" has status "Ready":"True"
	I0122 12:14:10.192669   11512 node_ready.go:38] duration metric: took 16.5087836s waiting for node "multinode-484200-m02" to be "Ready" ...
	I0122 12:14:10.192669   11512 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0122 12:14:10.192669   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/namespaces/kube-system/pods
	I0122 12:14:10.192669   11512 round_trippers.go:469] Request Headers:
	I0122 12:14:10.192669   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:14:10.192669   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:14:10.203742   11512 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0122 12:14:10.203742   11512 round_trippers.go:577] Response Headers:
	I0122 12:14:10.203742   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:14:10.204291   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:14:10.204291   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:14:10 GMT
	I0122 12:14:10.204291   11512 round_trippers.go:580]     Audit-Id: 85325fa2-82d0-464e-927d-cb3aaab8ccf4
	I0122 12:14:10.204291   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:14:10.204291   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:14:10.206215   11512 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"629"},"items":[{"metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"438","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67514 chars]
	I0122 12:14:10.210020   11512 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-g9wk9" in "kube-system" namespace to be "Ready" ...
	I0122 12:14:10.210255   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:14:10.210255   11512 round_trippers.go:469] Request Headers:
	I0122 12:14:10.210360   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:14:10.210360   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:14:10.212569   11512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:14:10.212569   11512 round_trippers.go:577] Response Headers:
	I0122 12:14:10.212569   11512 round_trippers.go:580]     Audit-Id: 8b918ed3-3a83-47d6-8754-802643a84baf
	I0122 12:14:10.212569   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:14:10.212569   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:14:10.212569   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:14:10.212569   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:14:10.212569   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:14:10 GMT
	I0122 12:14:10.213545   11512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"438","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6284 chars]
	I0122 12:14:10.214312   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:14:10.214370   11512 round_trippers.go:469] Request Headers:
	I0122 12:14:10.214370   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:14:10.214370   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:14:10.221759   11512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0122 12:14:10.221759   11512 round_trippers.go:577] Response Headers:
	I0122 12:14:10.221759   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:14:10.221759   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:14:10.221759   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:14:10.221759   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:14:10 GMT
	I0122 12:14:10.221759   11512 round_trippers.go:580]     Audit-Id: 3671190b-d402-4efa-96af-aeb79c3d7c17
	I0122 12:14:10.221759   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:14:10.222669   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"447","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0122 12:14:10.222669   11512 pod_ready.go:92] pod "coredns-5dd5756b68-g9wk9" in "kube-system" namespace has status "Ready":"True"
	I0122 12:14:10.222669   11512 pod_ready.go:81] duration metric: took 12.6487ms waiting for pod "coredns-5dd5756b68-g9wk9" in "kube-system" namespace to be "Ready" ...
	I0122 12:14:10.222669   11512 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:14:10.222669   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-484200
	I0122 12:14:10.222669   11512 round_trippers.go:469] Request Headers:
	I0122 12:14:10.222669   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:14:10.222669   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:14:10.225452   11512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:14:10.225452   11512 round_trippers.go:577] Response Headers:
	I0122 12:14:10.225452   11512 round_trippers.go:580]     Audit-Id: 107a3ea0-2a95-45ab-b60c-f77bb04db102
	I0122 12:14:10.225452   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:14:10.225452   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:14:10.225452   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:14:10.225452   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:14:10.225452   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:14:10 GMT
	I0122 12:14:10.226431   11512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-484200","namespace":"kube-system","uid":"097d4c37-0438-4d04-9327-a80a8f986677","resourceVersion":"296","creationTimestamp":"2024-01-22T12:10:47Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.177.163:2379","kubernetes.io/config.hash":"95ebf92d1132dafd76b54aa40e5207c5","kubernetes.io/config.mirror":"95ebf92d1132dafd76b54aa40e5207c5","kubernetes.io/config.seen":"2024-01-22T12:10:46.856983674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5872 chars]
	I0122 12:14:10.226431   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:14:10.226431   11512 round_trippers.go:469] Request Headers:
	I0122 12:14:10.226431   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:14:10.226431   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:14:10.229808   11512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:14:10.229808   11512 round_trippers.go:577] Response Headers:
	I0122 12:14:10.229808   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:14:10 GMT
	I0122 12:14:10.229959   11512 round_trippers.go:580]     Audit-Id: e3fa1d09-bb43-4ece-9ccd-7f0766aadb70
	I0122 12:14:10.229959   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:14:10.229959   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:14:10.229959   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:14:10.229959   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:14:10.230019   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"447","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0122 12:14:10.230019   11512 pod_ready.go:92] pod "etcd-multinode-484200" in "kube-system" namespace has status "Ready":"True"
	I0122 12:14:10.230019   11512 pod_ready.go:81] duration metric: took 7.3499ms waiting for pod "etcd-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:14:10.230019   11512 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:14:10.230673   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-484200
	I0122 12:14:10.230673   11512 round_trippers.go:469] Request Headers:
	I0122 12:14:10.230673   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:14:10.230673   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:14:10.233538   11512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:14:10.233538   11512 round_trippers.go:577] Response Headers:
	I0122 12:14:10.233538   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:14:10.233538   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:14:10 GMT
	I0122 12:14:10.233538   11512 round_trippers.go:580]     Audit-Id: c53a429c-c01e-4bb5-b3f2-781175b5fafe
	I0122 12:14:10.233538   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:14:10.233538   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:14:10.233538   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:14:10.233538   11512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-484200","namespace":"kube-system","uid":"c10bf08a-e8a1-4355-8369-b54d7c5e6b68","resourceVersion":"294","creationTimestamp":"2024-01-22T12:10:45Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.177.163:8443","kubernetes.io/config.hash":"94caf781e9d5d6851ed3cfe41e8c6cd0","kubernetes.io/config.mirror":"94caf781e9d5d6851ed3cfe41e8c6cd0","kubernetes.io/config.seen":"2024-01-22T12:10:37.616243302Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7408 chars]
	I0122 12:14:10.235735   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:14:10.235882   11512 round_trippers.go:469] Request Headers:
	I0122 12:14:10.235920   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:14:10.235920   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:14:10.242191   11512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0122 12:14:10.242308   11512 round_trippers.go:577] Response Headers:
	I0122 12:14:10.242341   11512 round_trippers.go:580]     Audit-Id: 0e2c4244-e6ea-4e24-a5d9-bfe397d6d80d
	I0122 12:14:10.242341   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:14:10.242429   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:14:10.242429   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:14:10.242429   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:14:10.242496   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:14:10 GMT
	I0122 12:14:10.243390   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"447","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0122 12:14:10.243927   11512 pod_ready.go:92] pod "kube-apiserver-multinode-484200" in "kube-system" namespace has status "Ready":"True"
	I0122 12:14:10.243927   11512 pod_ready.go:81] duration metric: took 13.3314ms waiting for pod "kube-apiserver-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:14:10.243927   11512 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:14:10.244255   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-484200
	I0122 12:14:10.244255   11512 round_trippers.go:469] Request Headers:
	I0122 12:14:10.244255   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:14:10.244255   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:14:10.246955   11512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:14:10.246955   11512 round_trippers.go:577] Response Headers:
	I0122 12:14:10.246955   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:14:10.246955   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:14:10.246955   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:14:10.246955   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:14:10 GMT
	I0122 12:14:10.246955   11512 round_trippers.go:580]     Audit-Id: bd8d328e-4ebb-42ae-acb9-e14460a4329e
	I0122 12:14:10.246955   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:14:10.246955   11512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-484200","namespace":"kube-system","uid":"2f20b1d1-9ba0-417b-9060-a0e50a1522ad","resourceVersion":"327","creationTimestamp":"2024-01-22T12:10:47Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"09e3d582188e7c296d7c87aae161a2a5","kubernetes.io/config.mirror":"09e3d582188e7c296d7c87aae161a2a5","kubernetes.io/config.seen":"2024-01-22T12:10:46.856991974Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6973 chars]
	I0122 12:14:10.248344   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:14:10.248530   11512 round_trippers.go:469] Request Headers:
	I0122 12:14:10.248530   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:14:10.248530   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:14:10.250835   11512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:14:10.250835   11512 round_trippers.go:577] Response Headers:
	I0122 12:14:10.250835   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:14:10.251299   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:14:10 GMT
	I0122 12:14:10.251299   11512 round_trippers.go:580]     Audit-Id: 7379ddcc-a286-4139-97e2-638990b6fd06
	I0122 12:14:10.251358   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:14:10.251358   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:14:10.251358   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:14:10.251552   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"447","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0122 12:14:10.252320   11512 pod_ready.go:92] pod "kube-controller-manager-multinode-484200" in "kube-system" namespace has status "Ready":"True"
	I0122 12:14:10.252408   11512 pod_ready.go:81] duration metric: took 8.2937ms waiting for pod "kube-controller-manager-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:14:10.252408   11512 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m94rg" in "kube-system" namespace to be "Ready" ...
	I0122 12:14:10.392047   11512 request.go:629] Waited for 139.3483ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.177.163:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m94rg
	I0122 12:14:10.392282   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m94rg
	I0122 12:14:10.392282   11512 round_trippers.go:469] Request Headers:
	I0122 12:14:10.392407   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:14:10.392407   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:14:10.400006   11512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0122 12:14:10.400006   11512 round_trippers.go:577] Response Headers:
	I0122 12:14:10.400006   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:14:10.400006   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:14:10.400006   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:14:10.400006   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:14:10.400006   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:14:10 GMT
	I0122 12:14:10.400006   11512 round_trippers.go:580]     Audit-Id: 56e58141-b167-436d-ac91-89233e774384
	I0122 12:14:10.400790   11512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-m94rg","generateName":"kube-proxy-","namespace":"kube-system","uid":"603ddbfd-0357-4e57-a38a-5054e02f81ee","resourceVersion":"402","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7762395d-4348-422a-b76e-6e2cdf460767","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7762395d-4348-422a-b76e-6e2cdf460767\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0122 12:14:10.595764   11512 request.go:629] Waited for 194.7974ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:14:10.595898   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:14:10.595898   11512 round_trippers.go:469] Request Headers:
	I0122 12:14:10.595898   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:14:10.595898   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:14:10.601742   11512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:14:10.601742   11512 round_trippers.go:577] Response Headers:
	I0122 12:14:10.601742   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:14:10.601742   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:14:10.601742   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:14:10.601742   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:14:10 GMT
	I0122 12:14:10.601742   11512 round_trippers.go:580]     Audit-Id: 0a89885e-5b5f-47f1-8dc4-602bd16001d2
	I0122 12:14:10.601742   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:14:10.602441   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"447","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0122 12:14:10.602441   11512 pod_ready.go:92] pod "kube-proxy-m94rg" in "kube-system" namespace has status "Ready":"True"
	I0122 12:14:10.602972   11512 pod_ready.go:81] duration metric: took 350.5628ms waiting for pod "kube-proxy-m94rg" in "kube-system" namespace to be "Ready" ...
	I0122 12:14:10.602972   11512 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nsjc6" in "kube-system" namespace to be "Ready" ...
	I0122 12:14:10.798531   11512 request.go:629] Waited for 195.3707ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.177.163:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nsjc6
	I0122 12:14:10.798817   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nsjc6
	I0122 12:14:10.798817   11512 round_trippers.go:469] Request Headers:
	I0122 12:14:10.798979   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:14:10.798979   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:14:10.802876   11512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:14:10.802930   11512 round_trippers.go:577] Response Headers:
	I0122 12:14:10.802930   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:14:10.802930   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:14:10.802930   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:14:10.802930   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:14:10.802930   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:14:10 GMT
	I0122 12:14:10.802930   11512 round_trippers.go:580]     Audit-Id: dfd0ea15-38dd-4f7e-ba69-ade03ea83d80
	I0122 12:14:10.802930   11512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nsjc6","generateName":"kube-proxy-","namespace":"kube-system","uid":"6023ce45-661a-489d-aacf-21a85a2c32e4","resourceVersion":"612","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7762395d-4348-422a-b76e-6e2cdf460767","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7762395d-4348-422a-b76e-6e2cdf460767\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0122 12:14:10.987383   11512 request.go:629] Waited for 183.2004ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.177.163:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:14:10.987568   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:14:10.987568   11512 round_trippers.go:469] Request Headers:
	I0122 12:14:10.987668   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:14:10.987668   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:14:10.991381   11512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:14:10.992039   11512 round_trippers.go:577] Response Headers:
	I0122 12:14:10.992039   11512 round_trippers.go:580]     Audit-Id: 6e16c5b3-ce2f-4df3-bc46-6700ad1f4d13
	I0122 12:14:10.992039   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:14:10.992039   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:14:10.992039   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:14:10.992039   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:14:10.992039   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:14:10 GMT
	I0122 12:14:10.992367   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"629","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_13_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3256 chars]
	I0122 12:14:10.992877   11512 pod_ready.go:92] pod "kube-proxy-nsjc6" in "kube-system" namespace has status "Ready":"True"
	I0122 12:14:10.992877   11512 pod_ready.go:81] duration metric: took 389.9037ms waiting for pod "kube-proxy-nsjc6" in "kube-system" namespace to be "Ready" ...
	I0122 12:14:10.992877   11512 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:14:11.190282   11512 request.go:629] Waited for 196.955ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.177.163:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-484200
	I0122 12:14:11.190282   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-484200
	I0122 12:14:11.190282   11512 round_trippers.go:469] Request Headers:
	I0122 12:14:11.190282   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:14:11.190282   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:14:11.195147   11512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:14:11.195210   11512 round_trippers.go:577] Response Headers:
	I0122 12:14:11.195210   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:14:11 GMT
	I0122 12:14:11.195210   11512 round_trippers.go:580]     Audit-Id: 7a19f62b-e886-4563-921c-35a4957b6521
	I0122 12:14:11.195210   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:14:11.195274   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:14:11.195274   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:14:11.195298   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:14:11.195696   11512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-484200","namespace":"kube-system","uid":"d2f95119-a5d5-4331-880e-bf6add1d87b6","resourceVersion":"330","creationTimestamp":"2024-01-22T12:10:47Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"59ca1fdff10e7440743e9ed3e4fed55c","kubernetes.io/config.mirror":"59ca1fdff10e7440743e9ed3e4fed55c","kubernetes.io/config.seen":"2024-01-22T12:10:46.856993074Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4703 chars]
	I0122 12:14:11.392513   11512 request.go:629] Waited for 196.0605ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:14:11.392688   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes/multinode-484200
	I0122 12:14:11.392688   11512 round_trippers.go:469] Request Headers:
	I0122 12:14:11.392688   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:14:11.392688   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:14:11.397165   11512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:14:11.397435   11512 round_trippers.go:577] Response Headers:
	I0122 12:14:11.397435   11512 round_trippers.go:580]     Audit-Id: 20492261-10b1-4741-acdb-61f5ad0814a7
	I0122 12:14:11.397435   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:14:11.397435   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:14:11.397435   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:14:11.397435   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:14:11.397435   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:14:11 GMT
	I0122 12:14:11.397860   11512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"447","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0122 12:14:11.398642   11512 pod_ready.go:92] pod "kube-scheduler-multinode-484200" in "kube-system" namespace has status "Ready":"True"
	I0122 12:14:11.398642   11512 pod_ready.go:81] duration metric: took 405.6729ms waiting for pod "kube-scheduler-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:14:11.398642   11512 pod_ready.go:38] duration metric: took 1.2059678s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0122 12:14:11.398642   11512 system_svc.go:44] waiting for kubelet service to be running ....
	I0122 12:14:11.411353   11512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 12:14:11.433067   11512 system_svc.go:56] duration metric: took 34.424ms WaitForService to wait for kubelet.
	I0122 12:14:11.433182   11512 kubeadm.go:581] duration metric: took 17.7879208s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0122 12:14:11.433182   11512 node_conditions.go:102] verifying NodePressure condition ...
	I0122 12:14:11.595846   11512 request.go:629] Waited for 162.4182ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.177.163:8443/api/v1/nodes
	I0122 12:14:11.595846   11512 round_trippers.go:463] GET https://172.23.177.163:8443/api/v1/nodes
	I0122 12:14:11.595846   11512 round_trippers.go:469] Request Headers:
	I0122 12:14:11.595846   11512 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:14:11.595846   11512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:14:11.599442   11512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:14:11.600154   11512 round_trippers.go:577] Response Headers:
	I0122 12:14:11.600154   11512 round_trippers.go:580]     Audit-Id: b52f2583-e0aa-438d-8d8d-db7f5419e282
	I0122 12:14:11.600154   11512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:14:11.600154   11512 round_trippers.go:580]     Content-Type: application/json
	I0122 12:14:11.600154   11512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:14:11.600154   11512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:14:11.600154   11512 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:14:11 GMT
	I0122 12:14:11.600154   11512 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"631"},"items":[{"metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"447","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9260 chars]
	I0122 12:14:11.601155   11512 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0122 12:14:11.601266   11512 node_conditions.go:123] node cpu capacity is 2
	I0122 12:14:11.601266   11512 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0122 12:14:11.601266   11512 node_conditions.go:123] node cpu capacity is 2
	I0122 12:14:11.601266   11512 node_conditions.go:105] duration metric: took 168.0827ms to run NodePressure ...
	I0122 12:14:11.601339   11512 start.go:228] waiting for startup goroutines ...
	I0122 12:14:11.601397   11512 start.go:242] writing updated cluster config ...
	I0122 12:14:11.615700   11512 ssh_runner.go:195] Run: rm -f paused
	I0122 12:14:11.768067   11512 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0122 12:14:11.769762   11512 out.go:177] * Done! kubectl is now configured to use "multinode-484200" cluster and "default" namespace by default
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-01-22 12:08:52 UTC, ends at Mon 2024-01-22 12:15:27 UTC. --
	Jan 22 12:11:12 multinode-484200 dockerd[1321]: time="2024-01-22T12:11:12.834537438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 12:11:12 multinode-484200 dockerd[1321]: time="2024-01-22T12:11:12.840556573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 12:11:12 multinode-484200 dockerd[1321]: time="2024-01-22T12:11:12.840732474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 12:11:12 multinode-484200 dockerd[1321]: time="2024-01-22T12:11:12.840833775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 12:11:12 multinode-484200 dockerd[1321]: time="2024-01-22T12:11:12.840945875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 12:11:13 multinode-484200 cri-dockerd[1206]: time="2024-01-22T12:11:13Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/19c7fcee88a714d24d50e5fd27bfea10c3135628601e2f8f6e644572f76bae22/resolv.conf as [nameserver 172.23.176.1]"
	Jan 22 12:11:13 multinode-484200 cri-dockerd[1206]: time="2024-01-22T12:11:13Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e46fc8eaac50826b1600516de4520e842a611088421d62eb9f49e3f2b3aca0a2/resolv.conf as [nameserver 172.23.176.1]"
	Jan 22 12:11:13 multinode-484200 dockerd[1321]: time="2024-01-22T12:11:13.530067257Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 12:11:13 multinode-484200 dockerd[1321]: time="2024-01-22T12:11:13.530160458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 12:11:13 multinode-484200 dockerd[1321]: time="2024-01-22T12:11:13.530209058Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 12:11:13 multinode-484200 dockerd[1321]: time="2024-01-22T12:11:13.530225558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 12:11:13 multinode-484200 dockerd[1321]: time="2024-01-22T12:11:13.640577270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 12:11:13 multinode-484200 dockerd[1321]: time="2024-01-22T12:11:13.640645871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 12:11:13 multinode-484200 dockerd[1321]: time="2024-01-22T12:11:13.640668671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 12:11:13 multinode-484200 dockerd[1321]: time="2024-01-22T12:11:13.640774471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 12:14:37 multinode-484200 dockerd[1321]: time="2024-01-22T12:14:37.058922460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 12:14:37 multinode-484200 dockerd[1321]: time="2024-01-22T12:14:37.059029361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 12:14:37 multinode-484200 dockerd[1321]: time="2024-01-22T12:14:37.059069661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 12:14:37 multinode-484200 dockerd[1321]: time="2024-01-22T12:14:37.059088461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 12:14:37 multinode-484200 cri-dockerd[1206]: time="2024-01-22T12:14:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/adbe8caf1deccaf1b5d55c4c737eedcd4401138f684987425d219761c21aa69a/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jan 22 12:14:38 multinode-484200 cri-dockerd[1206]: time="2024-01-22T12:14:38Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jan 22 12:14:38 multinode-484200 dockerd[1321]: time="2024-01-22T12:14:38.741257358Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 12:14:38 multinode-484200 dockerd[1321]: time="2024-01-22T12:14:38.741344459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 12:14:38 multinode-484200 dockerd[1321]: time="2024-01-22T12:14:38.741377559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 12:14:38 multinode-484200 dockerd[1321]: time="2024-01-22T12:14:38.741396359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c414da76cc7d1       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   49 seconds ago      Running             busybox                   0                   adbe8caf1decc       busybox-5b5d89c9d6-flgvf
	f56e8f9de45ec       ead0a4a53df89                                                                                         4 minutes ago       Running             coredns                   0                   e46fc8eaac508       coredns-5dd5756b68-g9wk9
	f4ffc78cf0f34       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   19c7fcee88a71       storage-provisioner
	e55da8df34e58       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052              4 minutes ago       Running             kindnet-cni               0                   e27d0a036e725       kindnet-f9bw4
	3f4ebe9f43eab       83f6cc407eed8                                                                                         4 minutes ago       Running             kube-proxy                0                   c1a37f7be8afd       kube-proxy-m94rg
	eae98e092eb29       e3db313c6dbc0                                                                                         4 minutes ago       Running             kube-scheduler            0                   861530d246aca       kube-scheduler-multinode-484200
	487e8e2356938       73deb9a3f7025                                                                                         4 minutes ago       Running             etcd                      0                   a8d60a34b9ea2       etcd-multinode-484200
	dca0a89d970d6       d058aa5ab969c                                                                                         4 minutes ago       Running             kube-controller-manager   0                   24388c3440dd3       kube-controller-manager-multinode-484200
	c38b8024525ee       7fe0e6f37db33                                                                                         4 minutes ago       Running             kube-apiserver            0                   37e2e82b58013       kube-apiserver-multinode-484200
	
	
	==> coredns [f56e8f9de45e] <==
	[INFO] 10.244.0.3:43271 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000964303s
	[INFO] 10.244.1.2:40023 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000227101s
	[INFO] 10.244.1.2:41421 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000811s
	[INFO] 10.244.1.2:34776 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000184201s
	[INFO] 10.244.1.2:51063 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000779s
	[INFO] 10.244.1.2:36577 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001729s
	[INFO] 10.244.1.2:46879 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000052901s
	[INFO] 10.244.1.2:51529 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000665s
	[INFO] 10.244.1.2:51244 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000145301s
	[INFO] 10.244.0.3:38031 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190001s
	[INFO] 10.244.0.3:60417 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000171501s
	[INFO] 10.244.0.3:59648 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001408s
	[INFO] 10.244.0.3:33828 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000146001s
	[INFO] 10.244.1.2:33852 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124901s
	[INFO] 10.244.1.2:59520 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074101s
	[INFO] 10.244.1.2:60914 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000894s
	[INFO] 10.244.1.2:40296 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000049901s
	[INFO] 10.244.0.3:35445 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001051s
	[INFO] 10.244.0.3:47826 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000766502s
	[INFO] 10.244.0.3:33032 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001394s
	[INFO] 10.244.0.3:37524 - 5 "PTR IN 1.176.23.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000107s
	[INFO] 10.244.1.2:47681 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001184s
	[INFO] 10.244.1.2:53042 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000105s
	[INFO] 10.244.1.2:34838 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000144601s
	[INFO] 10.244.1.2:55904 - 5 "PTR IN 1.176.23.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000817s
	
	
	==> describe nodes <==
	Name:               multinode-484200
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-484200
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=02088786cd935ebc1444d13e603368558b96d0e3
	                    minikube.k8s.io/name=multinode-484200
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_22T12_10_48_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jan 2024 12:10:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-484200
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jan 2024 12:15:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jan 2024 12:14:52 +0000   Mon, 22 Jan 2024 12:10:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jan 2024 12:14:52 +0000   Mon, 22 Jan 2024 12:10:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jan 2024 12:14:52 +0000   Mon, 22 Jan 2024 12:10:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jan 2024 12:14:52 +0000   Mon, 22 Jan 2024 12:11:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.23.177.163
	  Hostname:    multinode-484200
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 b2cefb2a579e43ea8607fadb55e497e6
	  System UUID:                e9e83410-9a00-1d40-8577-0dce0e816b57
	  Boot ID:                    1053fa19-de8e-48ab-bb37-aa7bcdcd080d
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-flgvf                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 coredns-5dd5756b68-g9wk9                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m29s
	  kube-system                 etcd-multinode-484200                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m41s
	  kube-system                 kindnet-f9bw4                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m29s
	  kube-system                 kube-apiserver-multinode-484200             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-controller-manager-multinode-484200    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-proxy-m94rg                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-scheduler-multinode-484200             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m26s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m51s (x8 over 4m51s)  kubelet          Node multinode-484200 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m51s (x8 over 4m51s)  kubelet          Node multinode-484200 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m51s (x7 over 4m51s)  kubelet          Node multinode-484200 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m42s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m41s                  kubelet          Node multinode-484200 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m41s                  kubelet          Node multinode-484200 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m41s                  kubelet          Node multinode-484200 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m29s                  node-controller  Node multinode-484200 event: Registered Node multinode-484200 in Controller
	  Normal  NodeReady                4m16s                  kubelet          Node multinode-484200 status is now: NodeReady
	
	
	Name:               multinode-484200-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-484200-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=02088786cd935ebc1444d13e603368558b96d0e3
	                    minikube.k8s.io/name=multinode-484200
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_22T12_13_53_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jan 2024 12:13:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-484200-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jan 2024 12:15:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jan 2024 12:14:54 +0000   Mon, 22 Jan 2024 12:13:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jan 2024 12:14:54 +0000   Mon, 22 Jan 2024 12:13:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jan 2024 12:14:54 +0000   Mon, 22 Jan 2024 12:13:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jan 2024 12:14:54 +0000   Mon, 22 Jan 2024 12:14:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.23.185.74
	  Hostname:    multinode-484200-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 1e37eb1fd57b4ca08770715bbd5c90ba
	  System UUID:                5daa307e-d17a-a94a-b576-0caf5f5dd761
	  Boot ID:                    f5504593-322d-4091-839f-63e1011721f5
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-r46hz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kindnet-xqpt7               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      96s
	  kube-system                 kube-proxy-nsjc6            0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 86s                kube-proxy       
	  Normal  NodeHasSufficientMemory  96s (x5 over 97s)  kubelet          Node multinode-484200-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    96s (x5 over 97s)  kubelet          Node multinode-484200-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     96s (x5 over 97s)  kubelet          Node multinode-484200-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           94s                node-controller  Node multinode-484200-m02 event: Registered Node multinode-484200-m02 in Controller
	  Normal  NodeReady                78s                kubelet          Node multinode-484200-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000120] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.041699] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.303258] systemd-fstab-generator[113]: Ignoring "noauto" for root device
	[  +1.269525] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +7.688256] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jan22 12:09] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.137217] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[Jan22 12:10] systemd-fstab-generator[938]: Ignoring "noauto" for root device
	[  +0.562155] systemd-fstab-generator[978]: Ignoring "noauto" for root device
	[  +0.160930] systemd-fstab-generator[989]: Ignoring "noauto" for root device
	[  +0.186954] systemd-fstab-generator[1002]: Ignoring "noauto" for root device
	[  +1.344507] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.409538] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
	[  +0.168966] systemd-fstab-generator[1172]: Ignoring "noauto" for root device
	[  +0.171512] systemd-fstab-generator[1183]: Ignoring "noauto" for root device
	[  +0.229043] systemd-fstab-generator[1198]: Ignoring "noauto" for root device
	[ +11.614120] systemd-fstab-generator[1306]: Ignoring "noauto" for root device
	[  +2.787752] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.797528] systemd-fstab-generator[1683]: Ignoring "noauto" for root device
	[  +0.573507] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.790017] systemd-fstab-generator[2611]: Ignoring "noauto" for root device
	[Jan22 12:11] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [487e8e235693] <==
	{"level":"info","ts":"2024-01-22T12:10:40.451857Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-22T12:10:40.453497Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-22T12:10:40.457077Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-22T12:10:40.453046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"23720dc5bda951ec switched to configuration voters=(2554119081614201324)"}
	{"level":"info","ts":"2024-01-22T12:10:40.458274Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"62c40a13fe136c89","local-member-id":"23720dc5bda951ec","added-peer-id":"23720dc5bda951ec","added-peer-peer-urls":["https://172.23.177.163:2380"]}
	{"level":"info","ts":"2024-01-22T12:10:40.470741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"23720dc5bda951ec is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-22T12:10:40.470953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"23720dc5bda951ec became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-22T12:10:40.471223Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"23720dc5bda951ec received MsgPreVoteResp from 23720dc5bda951ec at term 1"}
	{"level":"info","ts":"2024-01-22T12:10:40.471444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"23720dc5bda951ec became candidate at term 2"}
	{"level":"info","ts":"2024-01-22T12:10:40.471602Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"23720dc5bda951ec received MsgVoteResp from 23720dc5bda951ec at term 2"}
	{"level":"info","ts":"2024-01-22T12:10:40.471771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"23720dc5bda951ec became leader at term 2"}
	{"level":"info","ts":"2024-01-22T12:10:40.471941Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 23720dc5bda951ec elected leader 23720dc5bda951ec at term 2"}
	{"level":"info","ts":"2024-01-22T12:10:40.476818Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-22T12:10:40.480882Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"23720dc5bda951ec","local-member-attributes":"{Name:multinode-484200 ClientURLs:[https://172.23.177.163:2379]}","request-path":"/0/members/23720dc5bda951ec/attributes","cluster-id":"62c40a13fe136c89","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-22T12:10:40.481273Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-22T12:10:40.481865Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-22T12:10:40.493757Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-22T12:10:40.494037Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-22T12:10:40.488875Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.23.177.163:2379"}
	{"level":"info","ts":"2024-01-22T12:10:40.494641Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"62c40a13fe136c89","local-member-id":"23720dc5bda951ec","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-22T12:10:40.505397Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-22T12:10:40.496293Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-22T12:10:40.508776Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2024-01-22T12:14:01.295075Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.822344ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-484200-m02\" ","response":"range_response_count:1 size:2961"}
	{"level":"info","ts":"2024-01-22T12:14:01.295232Z","caller":"traceutil/trace.go:171","msg":"trace[2004078274] range","detail":"{range_begin:/registry/minions/multinode-484200-m02; range_end:; response_count:1; response_revision:608; }","duration":"102.991345ms","start":"2024-01-22T12:14:01.192223Z","end":"2024-01-22T12:14:01.295214Z","steps":["trace[2004078274] 'range keys from in-memory index tree'  (duration: 102.718543ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:15:28 up 6 min,  0 users,  load average: 0.27, 0.49, 0.27
	Linux multinode-484200 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [e55da8df34e5] <==
	I0122 12:14:21.774565       1 main.go:250] Node multinode-484200-m02 has CIDR [10.244.1.0/24] 
	I0122 12:14:31.789134       1 main.go:223] Handling node with IPs: map[172.23.177.163:{}]
	I0122 12:14:31.789306       1 main.go:227] handling current node
	I0122 12:14:31.789323       1 main.go:223] Handling node with IPs: map[172.23.185.74:{}]
	I0122 12:14:31.789333       1 main.go:250] Node multinode-484200-m02 has CIDR [10.244.1.0/24] 
	I0122 12:14:41.795868       1 main.go:223] Handling node with IPs: map[172.23.177.163:{}]
	I0122 12:14:41.795973       1 main.go:227] handling current node
	I0122 12:14:41.795989       1 main.go:223] Handling node with IPs: map[172.23.185.74:{}]
	I0122 12:14:41.795998       1 main.go:250] Node multinode-484200-m02 has CIDR [10.244.1.0/24] 
	I0122 12:14:51.805364       1 main.go:223] Handling node with IPs: map[172.23.177.163:{}]
	I0122 12:14:51.805476       1 main.go:227] handling current node
	I0122 12:14:51.805492       1 main.go:223] Handling node with IPs: map[172.23.185.74:{}]
	I0122 12:14:51.805500       1 main.go:250] Node multinode-484200-m02 has CIDR [10.244.1.0/24] 
	I0122 12:15:01.821467       1 main.go:223] Handling node with IPs: map[172.23.177.163:{}]
	I0122 12:15:01.821564       1 main.go:227] handling current node
	I0122 12:15:01.821597       1 main.go:223] Handling node with IPs: map[172.23.185.74:{}]
	I0122 12:15:01.821608       1 main.go:250] Node multinode-484200-m02 has CIDR [10.244.1.0/24] 
	I0122 12:15:11.831035       1 main.go:223] Handling node with IPs: map[172.23.177.163:{}]
	I0122 12:15:11.831134       1 main.go:227] handling current node
	I0122 12:15:11.831151       1 main.go:223] Handling node with IPs: map[172.23.185.74:{}]
	I0122 12:15:11.831162       1 main.go:250] Node multinode-484200-m02 has CIDR [10.244.1.0/24] 
	I0122 12:15:21.846970       1 main.go:223] Handling node with IPs: map[172.23.177.163:{}]
	I0122 12:15:21.847014       1 main.go:227] handling current node
	I0122 12:15:21.847027       1 main.go:223] Handling node with IPs: map[172.23.185.74:{}]
	I0122 12:15:21.847034       1 main.go:250] Node multinode-484200-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [c38b8024525e] <==
	I0122 12:10:43.247149       1 controller.go:624] quota admission added evaluator for: namespaces
	I0122 12:10:43.248469       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0122 12:10:43.248810       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0122 12:10:43.253439       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0122 12:10:43.253747       1 aggregator.go:166] initial CRD sync complete...
	I0122 12:10:43.253886       1 autoregister_controller.go:141] Starting autoregister controller
	I0122 12:10:43.253960       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0122 12:10:43.253992       1 cache.go:39] Caches are synced for autoregister controller
	I0122 12:10:43.291655       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0122 12:10:44.056831       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0122 12:10:44.064900       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0122 12:10:44.064916       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0122 12:10:44.827396       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0122 12:10:44.887014       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0122 12:10:44.989917       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0122 12:10:44.999175       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.23.177.163]
	I0122 12:10:45.000872       1 controller.go:624] quota admission added evaluator for: endpoints
	I0122 12:10:45.007273       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0122 12:10:45.200183       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0122 12:10:46.641399       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0122 12:10:46.657229       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0122 12:10:46.668060       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0122 12:10:59.183180       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0122 12:10:59.833869       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	E0122 12:14:42.367197       1 upgradeaware.go:425] Error proxying data from client to backend: write tcp 172.23.177.163:45798->172.23.177.163:10250: write: connection reset by peer
	
	
	==> kube-controller-manager [dca0a89d970d] <==
	I0122 12:11:00.355630       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.682052ms"
	I0122 12:11:00.357302       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="67.001µs"
	I0122 12:11:12.152217       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="93.7µs"
	I0122 12:11:12.177863       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="88.6µs"
	I0122 12:11:14.086798       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0122 12:11:14.500970       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.014893ms"
	I0122 12:11:14.501571       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="223.802µs"
	I0122 12:13:52.554464       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-484200-m02\" does not exist"
	I0122 12:13:52.574233       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-nsjc6"
	I0122 12:13:52.585162       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-484200-m02" podCIDRs=["10.244.1.0/24"]
	I0122 12:13:52.585241       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-xqpt7"
	I0122 12:13:54.123358       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-484200-m02"
	I0122 12:13:54.123423       1 event.go:307] "Event occurred" object="multinode-484200-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-484200-m02 event: Registered Node multinode-484200-m02 in Controller"
	I0122 12:14:10.180294       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-484200-m02"
	I0122 12:14:36.557099       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5b5d89c9d6 to 2"
	I0122 12:14:36.574776       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-r46hz"
	I0122 12:14:36.592807       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-flgvf"
	I0122 12:14:36.609843       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="53.44839ms"
	I0122 12:14:36.657840       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="47.66787ms"
	I0122 12:14:36.676563       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="18.353765ms"
	I0122 12:14:36.676927       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="33.2µs"
	I0122 12:14:38.897381       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="7.924528ms"
	I0122 12:14:38.897940       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="79.101µs"
	I0122 12:14:39.697878       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="13.230546ms"
	I0122 12:14:39.697940       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="31.3µs"
	
	
	==> kube-proxy [3f4ebe9f43ea] <==
	I0122 12:11:01.288123       1 server_others.go:69] "Using iptables proxy"
	I0122 12:11:01.302839       1 node.go:141] Successfully retrieved node IP: 172.23.177.163
	I0122 12:11:01.345402       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0122 12:11:01.345469       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0122 12:11:01.349832       1 server_others.go:152] "Using iptables Proxier"
	I0122 12:11:01.350084       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0122 12:11:01.350520       1 server.go:846] "Version info" version="v1.28.4"
	I0122 12:11:01.350558       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0122 12:11:01.352054       1 config.go:188] "Starting service config controller"
	I0122 12:11:01.352099       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0122 12:11:01.352423       1 config.go:97] "Starting endpoint slice config controller"
	I0122 12:11:01.352457       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0122 12:11:01.355014       1 config.go:315] "Starting node config controller"
	I0122 12:11:01.355026       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0122 12:11:01.452781       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0122 12:11:01.452832       1 shared_informer.go:318] Caches are synced for service config
	I0122 12:11:01.455329       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [eae98e092eb2] <==
	E0122 12:10:43.233319       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0122 12:10:43.233345       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0122 12:10:44.053132       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0122 12:10:44.053181       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0122 12:10:44.075736       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0122 12:10:44.075762       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0122 12:10:44.093151       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0122 12:10:44.093232       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0122 12:10:44.126909       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0122 12:10:44.127018       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0122 12:10:44.181040       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0122 12:10:44.181090       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0122 12:10:44.185425       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0122 12:10:44.185464       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0122 12:10:44.361095       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0122 12:10:44.361261       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0122 12:10:44.371930       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0122 12:10:44.372034       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0122 12:10:44.381618       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0122 12:10:44.381898       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0122 12:10:44.483844       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0122 12:10:44.484162       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0122 12:10:44.488731       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0122 12:10:44.489013       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0122 12:10:47.631321       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-22 12:08:52 UTC, ends at Mon 2024-01-22 12:15:28 UTC. --
	Jan 22 12:11:12 multinode-484200 kubelet[2639]: I0122 12:11:12.230077    2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klls8\" (UniqueName: \"kubernetes.io/projected/30187bc8-2b16-4ca8-944b-c71bfb25e36b-kube-api-access-klls8\") pod \"storage-provisioner\" (UID: \"30187bc8-2b16-4ca8-944b-c71bfb25e36b\") " pod="kube-system/storage-provisioner"
	Jan 22 12:11:12 multinode-484200 kubelet[2639]: I0122 12:11:12.230105    2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pmzd\" (UniqueName: \"kubernetes.io/projected/d5b665f2-ead9-43b8-8bf6-73c2edf81f8f-kube-api-access-9pmzd\") pod \"coredns-5dd5756b68-g9wk9\" (UID: \"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f\") " pod="kube-system/coredns-5dd5756b68-g9wk9"
	Jan 22 12:11:13 multinode-484200 kubelet[2639]: I0122 12:11:13.384071    2639 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19c7fcee88a714d24d50e5fd27bfea10c3135628601e2f8f6e644572f76bae22"
	Jan 22 12:11:13 multinode-484200 kubelet[2639]: I0122 12:11:13.424188    2639 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e46fc8eaac50826b1600516de4520e842a611088421d62eb9f49e3f2b3aca0a2"
	Jan 22 12:11:14 multinode-484200 kubelet[2639]: I0122 12:11:14.460973    2639 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=6.460849661 podCreationTimestamp="2024-01-22 12:11:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-22 12:11:14.460515359 +0000 UTC m=+27.862898604" watchObservedRunningTime="2024-01-22 12:11:14.460849661 +0000 UTC m=+27.863232806"
	Jan 22 12:11:46 multinode-484200 kubelet[2639]: E0122 12:11:46.989147    2639 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 22 12:11:46 multinode-484200 kubelet[2639]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 22 12:11:46 multinode-484200 kubelet[2639]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 22 12:11:46 multinode-484200 kubelet[2639]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 22 12:12:46 multinode-484200 kubelet[2639]: E0122 12:12:46.989801    2639 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 22 12:12:46 multinode-484200 kubelet[2639]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 22 12:12:46 multinode-484200 kubelet[2639]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 22 12:12:46 multinode-484200 kubelet[2639]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 22 12:13:46 multinode-484200 kubelet[2639]: E0122 12:13:46.989796    2639 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 22 12:13:46 multinode-484200 kubelet[2639]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 22 12:13:46 multinode-484200 kubelet[2639]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 22 12:13:46 multinode-484200 kubelet[2639]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 22 12:14:36 multinode-484200 kubelet[2639]: I0122 12:14:36.608625    2639 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-g9wk9" podStartSLOduration=217.60857436 podCreationTimestamp="2024-01-22 12:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-22 12:11:14.483961381 +0000 UTC m=+27.886344626" watchObservedRunningTime="2024-01-22 12:14:36.60857436 +0000 UTC m=+230.010957605"
	Jan 22 12:14:36 multinode-484200 kubelet[2639]: I0122 12:14:36.609459    2639 topology_manager.go:215] "Topology Admit Handler" podUID="194449f5-9dd4-4b81-9311-6f95d9a29787" podNamespace="default" podName="busybox-5b5d89c9d6-flgvf"
	Jan 22 12:14:36 multinode-484200 kubelet[2639]: I0122 12:14:36.672384    2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wldg\" (UniqueName: \"kubernetes.io/projected/194449f5-9dd4-4b81-9311-6f95d9a29787-kube-api-access-8wldg\") pod \"busybox-5b5d89c9d6-flgvf\" (UID: \"194449f5-9dd4-4b81-9311-6f95d9a29787\") " pod="default/busybox-5b5d89c9d6-flgvf"
	Jan 22 12:14:37 multinode-484200 kubelet[2639]: I0122 12:14:37.597602    2639 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adbe8caf1deccaf1b5d55c4c737eedcd4401138f684987425d219761c21aa69a"
	Jan 22 12:14:46 multinode-484200 kubelet[2639]: E0122 12:14:46.989539    2639 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 22 12:14:46 multinode-484200 kubelet[2639]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 22 12:14:46 multinode-484200 kubelet[2639]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 22 12:14:46 multinode-484200 kubelet[2639]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0122 12:15:19.955076    9740 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-484200 -n multinode-484200
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-484200 -n multinode-484200: (12.2073589s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-484200 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (57.25s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (539.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-484200
multinode_test.go:318: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-484200
multinode_test.go:318: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-484200: (1m21.7072507s)
multinode_test.go:323: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-484200 --wait=true -v=8 --alsologtostderr
E0122 12:33:23.969630    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
E0122 12:33:51.171092    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-671200\client.crt: The system cannot find the path specified.
multinode_test.go:323: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-484200 --wait=true -v=8 --alsologtostderr: exit status 1 (6m58.7011063s)

                                                
                                                
-- stdout --
	* [multinode-484200] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18014
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting control plane node multinode-484200 in cluster multinode-484200
	* Restarting existing hyperv VM for "multinode-484200" ...
	* Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	* Configuring CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Starting worker node multinode-484200-m02 in cluster multinode-484200
	* Restarting existing hyperv VM for "multinode-484200-m02" ...
	* Found network options:
	  - NO_PROXY=172.23.191.211
	  - NO_PROXY=172.23.191.211
	* Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	  - env NO_PROXY=172.23.191.211
	* Verifying Kubernetes components...
	* Starting worker node multinode-484200-m03 in cluster multinode-484200
	* Restarting existing hyperv VM for "multinode-484200-m03" ...
	* Found network options:
	  - NO_PROXY=172.23.191.211,172.23.187.210
	  - NO_PROXY=172.23.191.211,172.23.187.210
	* Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	  - env NO_PROXY=172.23.191.211
	  - env NO_PROXY=172.23.191.211,172.23.187.210
	* Verifying Kubernetes components...

                                                
                                                
-- /stdout --
** stderr ** 
	W0122 12:30:50.289731    9928 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0122 12:30:50.363371    9928 out.go:296] Setting OutFile to fd 1004 ...
	I0122 12:30:50.364157    9928 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0122 12:30:50.364157    9928 out.go:309] Setting ErrFile to fd 692...
	I0122 12:30:50.364157    9928 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0122 12:30:50.385941    9928 out.go:303] Setting JSON to false
	I0122 12:30:50.388673    9928 start.go:128] hostinfo: {"hostname":"minikube7","uptime":46019,"bootTime":1705880631,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3930 Build 19045.3930","kernelVersion":"10.0.19045.3930 Build 19045.3930","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0122 12:30:50.388673    9928 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0122 12:30:50.390824    9928 out.go:177] * [multinode-484200] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	I0122 12:30:50.391190    9928 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 12:30:50.391190    9928 notify.go:220] Checking for updates...
	I0122 12:30:50.392543    9928 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0122 12:30:50.393181    9928 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0122 12:30:50.394178    9928 out.go:177]   - MINIKUBE_LOCATION=18014
	I0122 12:30:50.394537    9928 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 12:30:50.396604    9928 config.go:182] Loaded profile config "multinode-484200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 12:30:50.396934    9928 driver.go:392] Setting default libvirt URI to qemu:///system
	I0122 12:30:55.836587    9928 out.go:177] * Using the hyperv driver based on existing profile
	I0122 12:30:55.837465    9928 start.go:298] selected driver: hyperv
	I0122 12:30:55.837465    9928 start.go:902] validating driver "hyperv" against &{Name:multinode-484200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:multinode-484200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.23.177.163 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.185.74 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.23.187.157 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false in
accel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTyp
e:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0122 12:30:55.837802    9928 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0122 12:30:55.894326    9928 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0122 12:30:55.894326    9928 cni.go:84] Creating CNI manager for ""
	I0122 12:30:55.894326    9928 cni.go:136] 3 nodes found, recommending kindnet
	I0122 12:30:55.894326    9928 start_flags.go:321] config:
	{Name:multinode-484200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-484200 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.23.177.163 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.185.74 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.23.187.157 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false i
stio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fal
se CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0122 12:30:55.894994    9928 iso.go:125] acquiring lock: {Name:mkdec6f26395f187031104148d2e41f41e1ae42b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 12:30:55.899241    9928 out.go:177] * Starting control plane node multinode-484200 in cluster multinode-484200
	I0122 12:30:55.899883    9928 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0122 12:30:55.900599    9928 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0122 12:30:55.900599    9928 cache.go:56] Caching tarball of preloaded images
	I0122 12:30:55.900599    9928 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0122 12:30:55.900599    9928 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0122 12:30:55.900599    9928 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\config.json ...
	I0122 12:30:55.902986    9928 start.go:365] acquiring machines lock for multinode-484200: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0122 12:30:55.902986    9928 start.go:369] acquired machines lock for "multinode-484200" in 0s
	I0122 12:30:55.902986    9928 start.go:96] Skipping create...Using existing machine configuration
	I0122 12:30:55.903987    9928 fix.go:54] fixHost starting: 
	I0122 12:30:55.903987    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:30:58.640896    9928 main.go:141] libmachine: [stdout =====>] : Off
	
	I0122 12:30:58.641066    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:30:58.641066    9928 fix.go:102] recreateIfNeeded on multinode-484200: state=Stopped err=<nil>
	W0122 12:30:58.641066    9928 fix.go:128] unexpected machine state, will restart: <nil>
	I0122 12:30:58.642536    9928 out.go:177] * Restarting existing hyperv VM for "multinode-484200" ...
	I0122 12:30:58.643346    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-484200
	I0122 12:31:01.658248    9928 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:31:01.658248    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:01.658248    9928 main.go:141] libmachine: Waiting for host to start...
	I0122 12:31:01.658348    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:31:03.933413    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:31:03.933473    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:03.933627    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:31:06.494397    9928 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:31:06.494641    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:07.509457    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:31:09.854565    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:31:09.854664    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:09.854664    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:31:12.397605    9928 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:31:12.397781    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:13.402359    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:31:15.569055    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:31:15.569088    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:15.569190    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:31:18.081049    9928 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:31:18.081286    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:19.088792    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:31:21.294223    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:31:21.294257    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:21.294320    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:31:23.835479    9928 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:31:23.835757    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:24.837048    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:31:27.081003    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:31:27.081095    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:27.081095    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:31:29.660428    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:31:29.660726    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:29.663822    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:31:31.805251    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:31:31.805251    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:31.805251    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:31:34.305456    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:31:34.305456    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:34.305551    9928 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\config.json ...
	I0122 12:31:34.307671    9928 machine.go:88] provisioning docker machine ...
	I0122 12:31:34.308309    9928 buildroot.go:166] provisioning hostname "multinode-484200"
	I0122 12:31:34.308457    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:31:36.408442    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:31:36.408654    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:36.408654    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:31:38.951759    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:31:38.951759    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:38.957547    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:31:38.958217    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.191.211 22 <nil> <nil>}
	I0122 12:31:38.958217    9928 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-484200 && echo "multinode-484200" | sudo tee /etc/hostname
	I0122 12:31:39.120498    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-484200
	
	I0122 12:31:39.120574    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:31:41.244792    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:31:41.245004    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:41.245004    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:31:43.816084    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:31:43.816084    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:43.822313    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:31:43.823097    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.191.211 22 <nil> <nil>}
	I0122 12:31:43.823140    9928 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-484200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-484200/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-484200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0122 12:31:43.977757    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 12:31:43.977757    9928 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0122 12:31:43.977886    9928 buildroot.go:174] setting up certificates
	I0122 12:31:43.977937    9928 provision.go:83] configureAuth start
	I0122 12:31:43.978023    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:31:46.094490    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:31:46.094490    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:46.094594    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:31:48.593389    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:31:48.593493    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:48.593493    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:31:50.686618    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:31:50.686820    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:50.686900    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:31:53.242868    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:31:53.242982    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:53.243131    9928 provision.go:138] copyHostCerts
	I0122 12:31:53.243162    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0122 12:31:53.243162    9928 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0122 12:31:53.243162    9928 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0122 12:31:53.243883    9928 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0122 12:31:53.245059    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0122 12:31:53.245213    9928 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0122 12:31:53.245213    9928 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0122 12:31:53.245734    9928 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0122 12:31:53.246657    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0122 12:31:53.247232    9928 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0122 12:31:53.247232    9928 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0122 12:31:53.247232    9928 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0122 12:31:53.248717    9928 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-484200 san=[172.23.191.211 172.23.191.211 localhost 127.0.0.1 minikube multinode-484200]
	I0122 12:31:53.745769    9928 provision.go:172] copyRemoteCerts
	I0122 12:31:53.758638    9928 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0122 12:31:53.758638    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:31:55.937226    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:31:55.937226    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:55.937226    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:31:58.487083    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:31:58.487248    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:58.487854    9928 sshutil.go:53] new ssh client: &{IP:172.23.191.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200\id_rsa Username:docker}
	I0122 12:31:58.595693    9928 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8369812s)
	I0122 12:31:58.595743    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0122 12:31:58.595831    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0122 12:31:58.635752    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0122 12:31:58.636266    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I0122 12:31:58.671363    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0122 12:31:58.671701    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0122 12:31:58.710168    9928 provision.go:86] duration metric: configureAuth took 14.7321668s
	I0122 12:31:58.710272    9928 buildroot.go:189] setting minikube options for container-runtime
	I0122 12:31:58.711251    9928 config.go:182] Loaded profile config "multinode-484200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 12:31:58.711422    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:32:00.827121    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:32:00.827121    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:32:00.827277    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:32:03.335985    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:32:03.335985    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:32:03.346408    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:32:03.347254    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.191.211 22 <nil> <nil>}
	I0122 12:32:03.347254    9928 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0122 12:32:03.486169    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0122 12:32:03.486255    9928 buildroot.go:70] root file system type: tmpfs
	I0122 12:32:03.486439    9928 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0122 12:32:03.486530    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:32:05.582752    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:32:05.582752    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:32:05.582840    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:32:08.078475    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:32:08.078475    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:32:08.084683    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:32:08.085261    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.191.211 22 <nil> <nil>}
	I0122 12:32:08.085382    9928 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0122 12:32:08.246410    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0122 12:32:08.246510    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:32:10.367238    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:32:10.367238    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:32:10.368109    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:32:12.914682    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:32:12.914778    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:32:12.920203    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:32:12.921107    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.191.211 22 <nil> <nil>}
	I0122 12:32:12.921107    9928 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0122 12:32:14.205600    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
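The SSH command above is an install-if-changed idiom: `diff -u old new` exits non-zero when the files differ (or, as in this run, when the old unit does not exist yet), so the `{ mv ...; systemctl ...; }` branch fires only when an update is actually needed. A minimal sketch with throwaway `/tmp` paths and without the `systemctl` calls:

```shell
# Sketch of the diff-or-install idiom from the SSH command above.
# Paths are throwaway stand-ins for /lib/systemd/system/docker.service{,.new}.
old=/tmp/demo.service
new=/tmp/demo.service.new
printf 'ExecStart=/usr/bin/demo\n' > "$new"
rm -f "$old"   # simulate the first-provision case seen in the log

# diff exits non-zero when $old is missing or differs from $new,
# so mv runs only when the new file should replace the old one.
diff -u "$old" "$new" 2>/dev/null || mv "$new" "$old"

cat "$old"   # -> ExecStart=/usr/bin/demo
```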
	I0122 12:32:14.206171    9928 machine.go:91] provisioned docker machine in 39.8976494s
	I0122 12:32:14.206223    9928 start.go:300] post-start starting for "multinode-484200" (driver="hyperv")
	I0122 12:32:14.206264    9928 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0122 12:32:14.219611    9928 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0122 12:32:14.219611    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:32:16.348251    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:32:16.348335    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:32:16.348335    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:32:18.828959    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:32:18.828959    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:32:18.829487    9928 sshutil.go:53] new ssh client: &{IP:172.23.191.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200\id_rsa Username:docker}
	I0122 12:32:18.940429    9928 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7207028s)
	I0122 12:32:18.953716    9928 ssh_runner.go:195] Run: cat /etc/os-release
	I0122 12:32:18.961304    9928 command_runner.go:130] > NAME=Buildroot
	I0122 12:32:18.961682    9928 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0122 12:32:18.961682    9928 command_runner.go:130] > ID=buildroot
	I0122 12:32:18.961682    9928 command_runner.go:130] > VERSION_ID=2021.02.12
	I0122 12:32:18.961682    9928 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0122 12:32:18.961682    9928 info.go:137] Remote host: Buildroot 2021.02.12
	I0122 12:32:18.961682    9928 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0122 12:32:18.961682    9928 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0122 12:32:18.963618    9928 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> 63962.pem in /etc/ssl/certs
	I0122 12:32:18.963704    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> /etc/ssl/certs/63962.pem
	I0122 12:32:18.976102    9928 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0122 12:32:18.991306    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem --> /etc/ssl/certs/63962.pem (1708 bytes)
	I0122 12:32:19.034395    9928 start.go:303] post-start completed in 4.8281502s
	I0122 12:32:19.034480    9928 fix.go:56] fixHost completed within 1m23.1301272s
	I0122 12:32:19.034546    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:32:21.130828    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:32:21.130828    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:32:21.130828    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:32:23.724428    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:32:23.724741    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:32:23.729801    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:32:23.730488    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.191.211 22 <nil> <nil>}
	I0122 12:32:23.730488    9928 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0122 12:32:23.873737    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705926743.873947323
	
	I0122 12:32:23.873816    9928 fix.go:206] guest clock: 1705926743.873947323
	I0122 12:32:23.873816    9928 fix.go:219] Guest: 2024-01-22 12:32:23.873947323 +0000 UTC Remote: 2024-01-22 12:32:19.0344806 +0000 UTC m=+88.847123001 (delta=4.839466723s)
	I0122 12:32:23.873816    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:32:26.033062    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:32:26.033062    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:32:26.033062    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:32:28.607995    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:32:28.608099    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:32:28.612977    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:32:28.613559    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.191.211 22 <nil> <nil>}
	I0122 12:32:28.613559    9928 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1705926743
	I0122 12:32:28.761089    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan 22 12:32:23 UTC 2024
	
	I0122 12:32:28.761089    9928 fix.go:226] clock set: Mon Jan 22 12:32:23 UTC 2024
	 (err=<nil>)
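The clock fix above reads the guest clock over SSH with `date +%s.%N`, computes the host/guest delta (4.84s here), and resets the guest with `sudo date -s @<epoch>`. The delta computation can be sketched locally; here both timestamps come from the same clock, whereas in the log the "guest" side is read over SSH:

```shell
# Sketch: compute a clock delta the way the fix.go log line reports it.
# Assumption: GNU date with nanosecond support (%N), as on the guest VM.
guest=$(date +%s.%N)
remote=$(date +%s.%N)

# awk handles the fractional-second subtraction
delta=$(awk -v g="$guest" -v r="$remote" 'BEGIN { printf "%.9f", r - g }')
echo "delta=${delta}s"
```

When the delta exceeds a threshold, the guest clock is stepped with `sudo date -s @<host-epoch>`, which is the second SSH command visible above.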
	I0122 12:32:28.761089    9928 start.go:83] releasing machines lock for "multinode-484200", held for 1m32.8576944s
	I0122 12:32:28.761723    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:32:30.891913    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:32:30.891913    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:32:30.892147    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:32:33.455695    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:32:33.455751    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:32:33.459949    9928 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0122 12:32:33.460072    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:32:33.471279    9928 ssh_runner.go:195] Run: cat /version.json
	I0122 12:32:33.472283    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:32:35.670517    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:32:35.670517    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:32:35.670517    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:32:35.718451    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:32:35.718451    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:32:35.718589    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:32:38.274392    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:32:38.274918    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:32:38.275596    9928 sshutil.go:53] new ssh client: &{IP:172.23.191.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200\id_rsa Username:docker}
	I0122 12:32:38.337377    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:32:38.337377    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:32:38.337595    9928 sshutil.go:53] new ssh client: &{IP:172.23.191.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200\id_rsa Username:docker}
	I0122 12:32:38.373718    9928 command_runner.go:130] > {"iso_version": "v1.32.1-1703784139-17866", "kicbase_version": "v0.0.42-1703723663-17866", "minikube_version": "v1.32.0", "commit": "eb69424d8f623d7cabea57d4395ce87adf1b5fc3"}
	I0122 12:32:38.373797    9928 ssh_runner.go:235] Completed: cat /version.json: (4.9024179s)
	I0122 12:32:38.386644    9928 ssh_runner.go:195] Run: systemctl --version
	I0122 12:32:38.496750    9928 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0122 12:32:38.496850    9928 command_runner.go:130] > systemd 247 (247)
	I0122 12:32:38.496850    9928 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0367792s)
	I0122 12:32:38.496850    9928 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0122 12:32:38.508640    9928 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0122 12:32:38.516348    9928 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0122 12:32:38.516708    9928 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0122 12:32:38.529327    9928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0122 12:32:38.551405    9928 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0122 12:32:38.551405    9928 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0122 12:32:38.551405    9928 start.go:475] detecting cgroup driver to use...
	I0122 12:32:38.551405    9928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 12:32:38.579202    9928 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0122 12:32:38.596414    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0122 12:32:38.624477    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0122 12:32:38.638904    9928 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0122 12:32:38.652404    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0122 12:32:38.682796    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 12:32:38.711915    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0122 12:32:38.740222    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 12:32:38.769950    9928 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0122 12:32:38.800527    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
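The run of `sh -c "sudo sed -i -r ..."` commands above edits containerd's `/etc/containerd/config.toml` in place. One of them, the `SystemdCgroup` flip that enforces the `cgroupfs` driver, can be sketched against a throwaway copy of the file:

```shell
# Sketch: the same sed edit the log runs against
# /etc/containerd/config.toml, applied to a throwaway file (no sudo).
cfg=/tmp/config.toml
printf '%s\n' \
  '[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]' \
  '  SystemdCgroup = true' > "$cfg"

# Force the cgroupfs driver: flip SystemdCgroup to false, preserving
# indentation via the captured leading-whitespace group.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"

grep SystemdCgroup "$cfg"   # -> "  SystemdCgroup = false"
```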
	I0122 12:32:38.831862    9928 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0122 12:32:38.845724    9928 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0122 12:32:38.861368    9928 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0122 12:32:38.889186    9928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 12:32:39.060498    9928 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0122 12:32:39.085356    9928 start.go:475] detecting cgroup driver to use...
	I0122 12:32:39.099850    9928 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0122 12:32:39.118111    9928 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0122 12:32:39.118111    9928 command_runner.go:130] > [Unit]
	I0122 12:32:39.118111    9928 command_runner.go:130] > Description=Docker Application Container Engine
	I0122 12:32:39.118111    9928 command_runner.go:130] > Documentation=https://docs.docker.com
	I0122 12:32:39.118111    9928 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0122 12:32:39.118111    9928 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0122 12:32:39.118111    9928 command_runner.go:130] > StartLimitBurst=3
	I0122 12:32:39.118111    9928 command_runner.go:130] > StartLimitIntervalSec=60
	I0122 12:32:39.118111    9928 command_runner.go:130] > [Service]
	I0122 12:32:39.118111    9928 command_runner.go:130] > Type=notify
	I0122 12:32:39.118111    9928 command_runner.go:130] > Restart=on-failure
	I0122 12:32:39.118111    9928 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0122 12:32:39.118111    9928 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0122 12:32:39.118111    9928 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0122 12:32:39.118111    9928 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0122 12:32:39.118111    9928 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0122 12:32:39.118111    9928 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0122 12:32:39.118111    9928 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0122 12:32:39.118111    9928 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0122 12:32:39.118111    9928 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0122 12:32:39.118111    9928 command_runner.go:130] > ExecStart=
	I0122 12:32:39.118111    9928 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0122 12:32:39.118111    9928 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0122 12:32:39.118111    9928 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0122 12:32:39.118111    9928 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0122 12:32:39.118111    9928 command_runner.go:130] > LimitNOFILE=infinity
	I0122 12:32:39.118111    9928 command_runner.go:130] > LimitNPROC=infinity
	I0122 12:32:39.118111    9928 command_runner.go:130] > LimitCORE=infinity
	I0122 12:32:39.118111    9928 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0122 12:32:39.118111    9928 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0122 12:32:39.118703    9928 command_runner.go:130] > TasksMax=infinity
	I0122 12:32:39.118703    9928 command_runner.go:130] > TimeoutStartSec=0
	I0122 12:32:39.118703    9928 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0122 12:32:39.118703    9928 command_runner.go:130] > Delegate=yes
	I0122 12:32:39.118703    9928 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0122 12:32:39.118703    9928 command_runner.go:130] > KillMode=process
	I0122 12:32:39.118703    9928 command_runner.go:130] > [Install]
	I0122 12:32:39.118795    9928 command_runner.go:130] > WantedBy=multi-user.target
	I0122 12:32:39.131859    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 12:32:39.166801    9928 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0122 12:32:39.203144    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 12:32:39.241528    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0122 12:32:39.273996    9928 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0122 12:32:39.331664    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0122 12:32:39.352019    9928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 12:32:39.380712    9928 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0122 12:32:39.393057    9928 ssh_runner.go:195] Run: which cri-dockerd
	I0122 12:32:39.397991    9928 command_runner.go:130] > /usr/bin/cri-dockerd
	I0122 12:32:39.409994    9928 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0122 12:32:39.426535    9928 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0122 12:32:39.467361    9928 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0122 12:32:39.646736    9928 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0122 12:32:39.810936    9928 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0122 12:32:39.810936    9928 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0122 12:32:39.854316    9928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 12:32:40.032260    9928 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0122 12:32:41.607672    9928 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.575406s)
	I0122 12:32:41.622485    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0122 12:32:41.656582    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0122 12:32:41.689867    9928 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0122 12:32:41.876944    9928 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0122 12:32:42.041710    9928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 12:32:42.245311    9928 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0122 12:32:42.286146    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0122 12:32:42.319102    9928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 12:32:42.493569    9928 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0122 12:32:42.596438    9928 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0122 12:32:42.610266    9928 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0122 12:32:42.617888    9928 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0122 12:32:42.617888    9928 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0122 12:32:42.617888    9928 command_runner.go:130] > Device: 16h/22d	Inode: 900         Links: 1
	I0122 12:32:42.617888    9928 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0122 12:32:42.617888    9928 command_runner.go:130] > Access: 2024-01-22 12:32:42.513179008 +0000
	I0122 12:32:42.617888    9928 command_runner.go:130] > Modify: 2024-01-22 12:32:42.513179008 +0000
	I0122 12:32:42.617888    9928 command_runner.go:130] > Change: 2024-01-22 12:32:42.517179008 +0000
	I0122 12:32:42.617888    9928 command_runner.go:130] >  Birth: -
	I0122 12:32:42.617888    9928 start.go:543] Will wait 60s for crictl version
	I0122 12:32:42.631135    9928 ssh_runner.go:195] Run: which crictl
	I0122 12:32:42.636541    9928 command_runner.go:130] > /usr/bin/crictl
	I0122 12:32:42.649195    9928 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0122 12:32:42.720075    9928 command_runner.go:130] > Version:  0.1.0
	I0122 12:32:42.720210    9928 command_runner.go:130] > RuntimeName:  docker
	I0122 12:32:42.720210    9928 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0122 12:32:42.720210    9928 command_runner.go:130] > RuntimeApiVersion:  v1
	I0122 12:32:42.720210    9928 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0122 12:32:42.730390    9928 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0122 12:32:42.761283    9928 command_runner.go:130] > 24.0.7
	I0122 12:32:42.771189    9928 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0122 12:32:42.801106    9928 command_runner.go:130] > 24.0.7
	I0122 12:32:42.804650    9928 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0122 12:32:42.804861    9928 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0122 12:32:42.810312    9928 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0122 12:32:42.810312    9928 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0122 12:32:42.810312    9928 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0122 12:32:42.810312    9928 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:49:f0:b7 Flags:up|broadcast|multicast|running}
	I0122 12:32:42.812891    9928 ip.go:210] interface addr: fe80::2ecf:c548:6b1f:89a8/64
	I0122 12:32:42.812891    9928 ip.go:210] interface addr: 172.23.176.1/20
	I0122 12:32:42.824480    9928 ssh_runner.go:195] Run: grep 172.23.176.1	host.minikube.internal$ /etc/hosts
	I0122 12:32:42.829528    9928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.176.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
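The `/bin/bash -c` command above updates `/etc/hosts` safely: filter out any stale `host.minikube.internal` line, append the fresh entry, write the result to a temp file, then copy it back so a partial write never lands in `/etc/hosts`. A sketch against a throwaway hosts file (no sudo; the real command targets `/etc/hosts` and uses bash's `$'\t'` for the tab):

```shell
# Sketch of the /etc/hosts rewrite idiom from the command above.
hosts=/tmp/hosts.demo
tab=$(printf '\t')
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"

# Drop the stale entry, append the current gateway IP, and replace
# the file via a temp copy (mirrors the /tmp/h.$$ step in the log).
{ grep -v "${tab}host.minikube.internal\$" "$hosts"
  printf '172.23.176.1\thost.minikube.internal\n'; } > /tmp/h.$$
cp /tmp/h.$$ "$hosts"

grep host.minikube.internal "$hosts"   # exactly one line, the new IP
```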
	I0122 12:32:42.848040    9928 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0122 12:32:42.858397    9928 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0122 12:32:42.884305    9928 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0122 12:32:42.884305    9928 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0122 12:32:42.884305    9928 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0122 12:32:42.884305    9928 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0122 12:32:42.884305    9928 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I0122 12:32:42.884305    9928 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0122 12:32:42.884305    9928 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0122 12:32:42.884305    9928 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0122 12:32:42.884305    9928 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 12:32:42.884305    9928 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0122 12:32:42.884305    9928 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0122 12:32:42.884305    9928 docker.go:615] Images already preloaded, skipping extraction
	I0122 12:32:42.894358    9928 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0122 12:32:42.919184    9928 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0122 12:32:42.919184    9928 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0122 12:32:42.919184    9928 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0122 12:32:42.920076    9928 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0122 12:32:42.920076    9928 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I0122 12:32:42.920076    9928 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0122 12:32:42.920076    9928 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0122 12:32:42.920076    9928 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0122 12:32:42.920123    9928 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 12:32:42.920123    9928 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0122 12:32:42.920153    9928 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0122 12:32:42.920153    9928 cache_images.go:84] Images are preloaded, skipping loading
	I0122 12:32:42.930240    9928 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0122 12:32:42.966208    9928 command_runner.go:130] > cgroupfs
	I0122 12:32:42.966208    9928 cni.go:84] Creating CNI manager for ""
	I0122 12:32:42.966857    9928 cni.go:136] 3 nodes found, recommending kindnet
	I0122 12:32:42.966857    9928 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0122 12:32:42.966962    9928 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.23.191.211 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-484200 NodeName:multinode-484200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.23.191.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.23.191.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0122 12:32:42.967273    9928 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.23.191.211
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-484200"
	  kubeletExtraArgs:
	    node-ip: 172.23.191.211
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.23.191.211"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0122 12:32:42.967484    9928 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-484200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.191.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-484200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0122 12:32:42.979510    9928 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0122 12:32:42.994300    9928 command_runner.go:130] > kubeadm
	I0122 12:32:42.995139    9928 command_runner.go:130] > kubectl
	I0122 12:32:42.995139    9928 command_runner.go:130] > kubelet
	I0122 12:32:42.995139    9928 binaries.go:44] Found k8s binaries, skipping transfer
	I0122 12:32:43.008601    9928 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0122 12:32:43.022600    9928 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0122 12:32:43.050493    9928 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0122 12:32:43.081359    9928 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0122 12:32:43.126366    9928 ssh_runner.go:195] Run: grep 172.23.191.211	control-plane.minikube.internal$ /etc/hosts
	I0122 12:32:43.131127    9928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.191.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
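	[editor note] The /etc/hosts rewrite above follows a grep-v / append / copy-back pattern. A minimal sketch of that pattern, run against a scratch file rather than the real /etc/hosts (the file path and sample entries here are illustrative, not from the test run):

```shell
# Idempotently pin NAME to IP in a hosts file: drop any stale line for the
# name, append the fresh entry, write to a temp file, then copy back.
HOSTS=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.23.177.163\tcontrol-plane.minikube.internal\n' > "$HOSTS"

IP=172.23.191.211
NAME=control-plane.minikube.internal

# cp rather than mv: cp rewrites the existing inode in place, which keeps
# the file valid when /etc/hosts is a bind mount inside the guest.
{ grep -v "$NAME" "$HOSTS"; printf '%s\t%s\n' "$IP" "$NAME"; } > "$HOSTS.new"
cp "$HOSTS.new" "$HOSTS"
cat "$HOSTS"
```

	Running the block twice leaves exactly one entry for the name, which is why the step is safe to repeat on every cluster start.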
	I0122 12:32:43.147630    9928 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200 for IP: 172.23.191.211
	I0122 12:32:43.147711    9928 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 12:32:43.147878    9928 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0122 12:32:43.148843    9928 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0122 12:32:43.149575    9928 certs.go:315] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\client.key
	I0122 12:32:43.149575    9928 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.key.e0ad0578
	I0122 12:32:43.149575    9928 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.crt.e0ad0578 with IP's: [172.23.191.211 10.96.0.1 127.0.0.1 10.0.0.1]
	I0122 12:32:43.287292    9928 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.crt.e0ad0578 ...
	I0122 12:32:43.287292    9928 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.crt.e0ad0578: {Name:mk2e4e9cce5f09968f0188a3b017f4358d8ae6b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 12:32:43.288544    9928 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.key.e0ad0578 ...
	I0122 12:32:43.288544    9928 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.key.e0ad0578: {Name:mk5521ea78846a551397996cd3f44504deac2f7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 12:32:43.289618    9928 certs.go:337] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.crt.e0ad0578 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.crt
	I0122 12:32:43.297690    9928 certs.go:341] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.key.e0ad0578 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.key
	I0122 12:32:43.300399    9928 certs.go:315] skipping aggregator signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\proxy-client.key
	I0122 12:32:43.300399    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0122 12:32:43.300399    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0122 12:32:43.300399    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0122 12:32:43.301382    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0122 12:32:43.301382    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0122 12:32:43.301382    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0122 12:32:43.301382    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0122 12:32:43.302260    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0122 12:32:43.302363    9928 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\6396.pem (1338 bytes)
	W0122 12:32:43.302972    9928 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\6396_empty.pem, impossibly tiny 0 bytes
	I0122 12:32:43.303108    9928 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0122 12:32:43.303238    9928 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0122 12:32:43.303238    9928 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0122 12:32:43.303238    9928 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0122 12:32:43.303980    9928 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem (1708 bytes)
	I0122 12:32:43.304439    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\6396.pem -> /usr/share/ca-certificates/6396.pem
	I0122 12:32:43.304439    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> /usr/share/ca-certificates/63962.pem
	I0122 12:32:43.304439    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:32:43.305234    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0122 12:32:43.344883    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0122 12:32:43.382052    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0122 12:32:43.418619    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0122 12:32:43.463223    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0122 12:32:43.504577    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0122 12:32:43.547257    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0122 12:32:43.588533    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0122 12:32:43.629240    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\6396.pem --> /usr/share/ca-certificates/6396.pem (1338 bytes)
	I0122 12:32:43.669867    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem --> /usr/share/ca-certificates/63962.pem (1708 bytes)
	I0122 12:32:43.710172    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0122 12:32:43.749572    9928 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0122 12:32:43.793197    9928 ssh_runner.go:195] Run: openssl version
	I0122 12:32:43.800870    9928 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0122 12:32:43.813518    9928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0122 12:32:43.841865    9928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:32:43.848221    9928 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 22 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:32:43.848553    9928 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 22 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:32:43.861268    9928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:32:43.869982    9928 command_runner.go:130] > b5213941
	I0122 12:32:43.884405    9928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0122 12:32:43.910888    9928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6396.pem && ln -fs /usr/share/ca-certificates/6396.pem /etc/ssl/certs/6396.pem"
	I0122 12:32:43.937815    9928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6396.pem
	I0122 12:32:43.943545    9928 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 22 10:53 /usr/share/ca-certificates/6396.pem
	I0122 12:32:43.943610    9928 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 22 10:53 /usr/share/ca-certificates/6396.pem
	I0122 12:32:43.956220    9928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6396.pem
	I0122 12:32:43.963283    9928 command_runner.go:130] > 51391683
	I0122 12:32:43.976321    9928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6396.pem /etc/ssl/certs/51391683.0"
	I0122 12:32:44.007160    9928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/63962.pem && ln -fs /usr/share/ca-certificates/63962.pem /etc/ssl/certs/63962.pem"
	I0122 12:32:44.036400    9928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/63962.pem
	I0122 12:32:44.042273    9928 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 22 10:53 /usr/share/ca-certificates/63962.pem
	I0122 12:32:44.042369    9928 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 22 10:53 /usr/share/ca-certificates/63962.pem
	I0122 12:32:44.053020    9928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/63962.pem
	I0122 12:32:44.062451    9928 command_runner.go:130] > 3ec20f2e
	I0122 12:32:44.074727    9928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/63962.pem /etc/ssl/certs/3ec20f2e.0"
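	[editor note] The three `openssl x509 -hash` / `ln -fs` pairs above implement OpenSSL's hashed-directory CA lookup: a CA in /etc/ssl/certs is found by the hash of its subject name, so each PEM needs a `<hash>.0` symlink. A sketch with a throwaway self-signed CA in a temp directory (its hash will not match the b5213941 / 51391683 / 3ec20f2e values in the log):

```shell
# Generate a throwaway CA, compute its subject-name hash, and create the
# <hash>.0 symlink that OpenSSL's lookup expects.
DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout "$DIR/ca.key" -out "$DIR/ca.pem" 2>/dev/null

HASH=$(openssl x509 -hash -noout -in "$DIR/ca.pem")
# Same shape as the log's: ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/<hash>.0
ln -fs "$DIR/ca.pem" "$DIR/$HASH.0"
ls -l "$DIR/$HASH.0"
```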
	I0122 12:32:44.102721    9928 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0122 12:32:44.109755    9928 command_runner.go:130] > ca.crt
	I0122 12:32:44.109755    9928 command_runner.go:130] > ca.key
	I0122 12:32:44.109755    9928 command_runner.go:130] > healthcheck-client.crt
	I0122 12:32:44.109755    9928 command_runner.go:130] > healthcheck-client.key
	I0122 12:32:44.109755    9928 command_runner.go:130] > peer.crt
	I0122 12:32:44.109755    9928 command_runner.go:130] > peer.key
	I0122 12:32:44.109755    9928 command_runner.go:130] > server.crt
	I0122 12:32:44.109755    9928 command_runner.go:130] > server.key
	I0122 12:32:44.122622    9928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0122 12:32:44.130188    9928 command_runner.go:130] > Certificate will not expire
	I0122 12:32:44.142862    9928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0122 12:32:44.152249    9928 command_runner.go:130] > Certificate will not expire
	I0122 12:32:44.166098    9928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0122 12:32:44.174209    9928 command_runner.go:130] > Certificate will not expire
	I0122 12:32:44.187619    9928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0122 12:32:44.196728    9928 command_runner.go:130] > Certificate will not expire
	I0122 12:32:44.212163    9928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0122 12:32:44.221229    9928 command_runner.go:130] > Certificate will not expire
	I0122 12:32:44.234784    9928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0122 12:32:44.243601    9928 command_runner.go:130] > Certificate will not expire
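	[editor note] Each `-checkend 86400` probe above asks whether the certificate is still valid 86400 seconds (24 h) from now: exit 0 prints "Certificate will not expire", exit 1 prints "Certificate will expire", and that status is what gates certificate regeneration. A sketch with a throwaway 10-year certificate:

```shell
# Create a long-lived self-signed cert, then run the same expiry probe
# the log shows for apiserver-etcd-client.crt, etcd/peer.crt, etc.
DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 -subj "/CN=demo" \
  -keyout "$DIR/key.pem" -out "$DIR/crt.pem" 2>/dev/null
openssl x509 -noout -in "$DIR/crt.pem" -checkend 86400
```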
	I0122 12:32:44.243879    9928 kubeadm.go:404] StartCluster: {Name:multinode-484200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-484200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.23.191.211 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.185.74 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.23.187.157 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0122 12:32:44.254088    9928 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0122 12:32:44.290868    9928 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0122 12:32:44.309792    9928 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0122 12:32:44.309792    9928 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0122 12:32:44.309792    9928 command_runner.go:130] > /var/lib/minikube/etcd:
	I0122 12:32:44.309792    9928 command_runner.go:130] > member
	I0122 12:32:44.310055    9928 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0122 12:32:44.310174    9928 kubeadm.go:636] restartCluster start
	I0122 12:32:44.328830    9928 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0122 12:32:44.344376    9928 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0122 12:32:44.345388    9928 kubeconfig.go:135] verify returned: extract IP: "multinode-484200" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 12:32:44.345769    9928 kubeconfig.go:146] "multinode-484200" context is missing from C:\Users\jenkins.minikube7\minikube-integration\kubeconfig - will repair!
	I0122 12:32:44.346367    9928 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 12:32:44.358248    9928 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 12:32:44.359298    9928 kapi.go:59] client config for multinode-484200: &rest.Config{Host:"https://172.23.191.211:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200/client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200/client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x184a600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0122 12:32:44.361048    9928 cert_rotation.go:137] Starting client certificate rotation controller
	I0122 12:32:44.375418    9928 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0122 12:32:44.393044    9928 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0122 12:32:44.393123    9928 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0122 12:32:44.393123    9928 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0122 12:32:44.393172    9928 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0122 12:32:44.393172    9928 command_runner.go:130] >  kind: InitConfiguration
	I0122 12:32:44.393172    9928 command_runner.go:130] >  localAPIEndpoint:
	I0122 12:32:44.393172    9928 command_runner.go:130] > -  advertiseAddress: 172.23.177.163
	I0122 12:32:44.393172    9928 command_runner.go:130] > +  advertiseAddress: 172.23.191.211
	I0122 12:32:44.393172    9928 command_runner.go:130] >    bindPort: 8443
	I0122 12:32:44.393172    9928 command_runner.go:130] >  bootstrapTokens:
	I0122 12:32:44.393329    9928 command_runner.go:130] >    - groups:
	I0122 12:32:44.393329    9928 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0122 12:32:44.393329    9928 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0122 12:32:44.393329    9928 command_runner.go:130] >    name: "multinode-484200"
	I0122 12:32:44.393329    9928 command_runner.go:130] >    kubeletExtraArgs:
	I0122 12:32:44.393410    9928 command_runner.go:130] > -    node-ip: 172.23.177.163
	I0122 12:32:44.393410    9928 command_runner.go:130] > +    node-ip: 172.23.191.211
	I0122 12:32:44.393410    9928 command_runner.go:130] >    taints: []
	I0122 12:32:44.393410    9928 command_runner.go:130] >  ---
	I0122 12:32:44.393410    9928 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0122 12:32:44.393471    9928 command_runner.go:130] >  kind: ClusterConfiguration
	I0122 12:32:44.393496    9928 command_runner.go:130] >  apiServer:
	I0122 12:32:44.393496    9928 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.23.177.163"]
	I0122 12:32:44.393525    9928 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.23.191.211"]
	I0122 12:32:44.393525    9928 command_runner.go:130] >    extraArgs:
	I0122 12:32:44.393525    9928 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0122 12:32:44.393525    9928 command_runner.go:130] >  controllerManager:
	I0122 12:32:44.393525    9928 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.23.177.163
	+  advertiseAddress: 172.23.191.211
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-484200"
	   kubeletExtraArgs:
	-    node-ip: 172.23.177.163
	+    node-ip: 172.23.191.211
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.23.177.163"]
	+  certSANs: ["127.0.0.1", "localhost", "172.23.191.211"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
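	[editor note] The "needs reconfigure: configs differ" decision above is driven entirely by `diff -u`'s exit status: 0 means the rendered config matches what is on disk (skip), 1 means drift (copy the .new file over and re-run the kubeadm phases). A minimal sketch of that gate, using two inline one-line configs in place of the real kubeadm.yaml files:

```shell
# Reproduce the drift check: old config carries the stale advertiseAddress,
# new config carries the node's current IP.
DIR=$(mktemp -d)
printf 'advertiseAddress: 172.23.177.163\n' > "$DIR/kubeadm.yaml"
printf 'advertiseAddress: 172.23.191.211\n' > "$DIR/kubeadm.yaml.new"

if diff -u "$DIR/kubeadm.yaml" "$DIR/kubeadm.yaml.new"; then
  echo "configs match, skipping reconfigure"
else
  # Mirrors the log's: sudo cp /var/tmp/minikube/kubeadm.yaml.new .../kubeadm.yaml
  echo "configs differ, reconfiguring"
  cp "$DIR/kubeadm.yaml.new" "$DIR/kubeadm.yaml"
fi
```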
	I0122 12:32:44.393525    9928 kubeadm.go:1135] stopping kube-system containers ...
	I0122 12:32:44.403231    9928 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0122 12:32:44.437239    9928 command_runner.go:130] > f56e8f9de45e
	I0122 12:32:44.437568    9928 command_runner.go:130] > f4ffc78cf0f3
	I0122 12:32:44.437568    9928 command_runner.go:130] > 19c7fcee88a7
	I0122 12:32:44.437568    9928 command_runner.go:130] > e46fc8eaac50
	I0122 12:32:44.437568    9928 command_runner.go:130] > e55da8df34e5
	I0122 12:32:44.437617    9928 command_runner.go:130] > 3f4ebe9f43ea
	I0122 12:32:44.437617    9928 command_runner.go:130] > c1a37f7be8af
	I0122 12:32:44.437617    9928 command_runner.go:130] > e27d0a036e72
	I0122 12:32:44.437617    9928 command_runner.go:130] > eae98e092eb2
	I0122 12:32:44.437649    9928 command_runner.go:130] > 487e8e235693
	I0122 12:32:44.437649    9928 command_runner.go:130] > dca0a89d970d
	I0122 12:32:44.437649    9928 command_runner.go:130] > c38b8024525e
	I0122 12:32:44.437699    9928 command_runner.go:130] > 861530d246ac
	I0122 12:32:44.437699    9928 command_runner.go:130] > 24388c3440dd
	I0122 12:32:44.437699    9928 command_runner.go:130] > 37e2e82b5801
	I0122 12:32:44.437699    9928 command_runner.go:130] > a8d60a34b9ea
	I0122 12:32:44.438170    9928 docker.go:483] Stopping containers: [f56e8f9de45e f4ffc78cf0f3 19c7fcee88a7 e46fc8eaac50 e55da8df34e5 3f4ebe9f43ea c1a37f7be8af e27d0a036e72 eae98e092eb2 487e8e235693 dca0a89d970d c38b8024525e 861530d246ac 24388c3440dd 37e2e82b5801 a8d60a34b9ea]
	I0122 12:32:44.448738    9928 ssh_runner.go:195] Run: docker stop f56e8f9de45e f4ffc78cf0f3 19c7fcee88a7 e46fc8eaac50 e55da8df34e5 3f4ebe9f43ea c1a37f7be8af e27d0a036e72 eae98e092eb2 487e8e235693 dca0a89d970d c38b8024525e 861530d246ac 24388c3440dd 37e2e82b5801 a8d60a34b9ea
	I0122 12:32:44.478400    9928 command_runner.go:130] > f56e8f9de45e
	I0122 12:32:44.478470    9928 command_runner.go:130] > f4ffc78cf0f3
	I0122 12:32:44.478470    9928 command_runner.go:130] > 19c7fcee88a7
	I0122 12:32:44.478470    9928 command_runner.go:130] > e46fc8eaac50
	I0122 12:32:44.478470    9928 command_runner.go:130] > e55da8df34e5
	I0122 12:32:44.478470    9928 command_runner.go:130] > 3f4ebe9f43ea
	I0122 12:32:44.478470    9928 command_runner.go:130] > c1a37f7be8af
	I0122 12:32:44.478470    9928 command_runner.go:130] > e27d0a036e72
	I0122 12:32:44.478470    9928 command_runner.go:130] > eae98e092eb2
	I0122 12:32:44.478470    9928 command_runner.go:130] > 487e8e235693
	I0122 12:32:44.478470    9928 command_runner.go:130] > dca0a89d970d
	I0122 12:32:44.478470    9928 command_runner.go:130] > c38b8024525e
	I0122 12:32:44.478470    9928 command_runner.go:130] > 861530d246ac
	I0122 12:32:44.478470    9928 command_runner.go:130] > 24388c3440dd
	I0122 12:32:44.478470    9928 command_runner.go:130] > 37e2e82b5801
	I0122 12:32:44.478470    9928 command_runner.go:130] > a8d60a34b9ea
	I0122 12:32:44.491224    9928 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0122 12:32:44.527819    9928 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0122 12:32:44.540318    9928 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0122 12:32:44.540318    9928 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0122 12:32:44.541200    9928 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0122 12:32:44.541354    9928 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 12:32:44.541445    9928 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 12:32:44.553447    9928 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0122 12:32:44.567507    9928 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0122 12:32:44.567507    9928 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 12:32:44.929452    9928 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0122 12:32:44.929958    9928 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0122 12:32:44.929958    9928 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0122 12:32:44.929958    9928 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0122 12:32:44.930095    9928 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0122 12:32:44.930095    9928 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0122 12:32:44.930095    9928 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0122 12:32:44.930095    9928 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0122 12:32:44.930095    9928 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0122 12:32:44.930095    9928 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0122 12:32:44.930095    9928 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0122 12:32:44.930173    9928 command_runner.go:130] > [certs] Using the existing "sa" key
	I0122 12:32:44.930250    9928 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 12:32:46.036426    9928 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0122 12:32:46.036426    9928 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0122 12:32:46.037061    9928 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0122 12:32:46.037061    9928 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0122 12:32:46.037121    9928 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0122 12:32:46.037121    9928 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.1068653s)
	I0122 12:32:46.037178    9928 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0122 12:32:46.299211    9928 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0122 12:32:46.299260    9928 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0122 12:32:46.299260    9928 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0122 12:32:46.299260    9928 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 12:32:46.391717    9928 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0122 12:32:46.391773    9928 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0122 12:32:46.391773    9928 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0122 12:32:46.391773    9928 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0122 12:32:46.391891    9928 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0122 12:32:46.475624    9928 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0122 12:32:46.475753    9928 api_server.go:52] waiting for apiserver process to appear ...
	I0122 12:32:46.494768    9928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 12:32:46.994064    9928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 12:32:47.504125    9928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 12:32:47.992281    9928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 12:32:48.501725    9928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 12:32:48.998788    9928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 12:32:49.061531    9928 command_runner.go:130] > 1830
	I0122 12:32:49.061531    9928 api_server.go:72] duration metric: took 2.585767s to wait for apiserver process to appear ...
	I0122 12:32:49.061531    9928 api_server.go:88] waiting for apiserver healthz status ...
	I0122 12:32:49.061531    9928 api_server.go:253] Checking apiserver healthz at https://172.23.191.211:8443/healthz ...
	I0122 12:32:53.390808    9928 api_server.go:279] https://172.23.191.211:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0122 12:32:53.390808    9928 api_server.go:103] status: https://172.23.191.211:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0122 12:32:53.391821    9928 api_server.go:253] Checking apiserver healthz at https://172.23.191.211:8443/healthz ...
	I0122 12:32:53.433345    9928 api_server.go:279] https://172.23.191.211:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0122 12:32:53.433486    9928 api_server.go:103] status: https://172.23.191.211:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0122 12:32:53.564896    9928 api_server.go:253] Checking apiserver healthz at https://172.23.191.211:8443/healthz ...
	I0122 12:32:53.578558    9928 api_server.go:279] https://172.23.191.211:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0122 12:32:53.578558    9928 api_server.go:103] status: https://172.23.191.211:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0122 12:32:54.072034    9928 api_server.go:253] Checking apiserver healthz at https://172.23.191.211:8443/healthz ...
	I0122 12:32:54.090023    9928 api_server.go:279] https://172.23.191.211:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0122 12:32:54.090023    9928 api_server.go:103] status: https://172.23.191.211:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0122 12:32:54.566845    9928 api_server.go:253] Checking apiserver healthz at https://172.23.191.211:8443/healthz ...
	I0122 12:32:54.582616    9928 api_server.go:279] https://172.23.191.211:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0122 12:32:54.582616    9928 api_server.go:103] status: https://172.23.191.211:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0122 12:32:55.077016    9928 api_server.go:253] Checking apiserver healthz at https://172.23.191.211:8443/healthz ...
	I0122 12:32:55.088814    9928 api_server.go:279] https://172.23.191.211:8443/healthz returned 200:
	ok
	I0122 12:32:55.089597    9928 round_trippers.go:463] GET https://172.23.191.211:8443/version
	I0122 12:32:55.089597    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:55.089597    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:55.089597    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:55.102927    9928 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0122 12:32:55.102927    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:55.102927    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:55.102927    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:55.102927    9928 round_trippers.go:580]     Content-Length: 264
	I0122 12:32:55.102927    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:55 GMT
	I0122 12:32:55.102927    9928 round_trippers.go:580]     Audit-Id: e09fdf80-8995-4398-8c74-4646a50e7309
	I0122 12:32:55.102927    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:55.102927    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:55.102927    9928 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0122 12:32:55.102927    9928 api_server.go:141] control plane version: v1.28.4
	I0122 12:32:55.102927    9928 api_server.go:131] duration metric: took 6.0413692s to wait for apiserver health ...
	I0122 12:32:55.102927    9928 cni.go:84] Creating CNI manager for ""
	I0122 12:32:55.102927    9928 cni.go:136] 3 nodes found, recommending kindnet
	I0122 12:32:55.103932    9928 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0122 12:32:55.116926    9928 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0122 12:32:55.126574    9928 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0122 12:32:55.126657    9928 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0122 12:32:55.126657    9928 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0122 12:32:55.126657    9928 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0122 12:32:55.126657    9928 command_runner.go:130] > Access: 2024-01-22 12:31:26.825009300 +0000
	I0122 12:32:55.126657    9928 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0122 12:32:55.126657    9928 command_runner.go:130] > Change: 2024-01-22 12:31:17.097000000 +0000
	I0122 12:32:55.126775    9928 command_runner.go:130] >  Birth: -
	I0122 12:32:55.126840    9928 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0122 12:32:55.126887    9928 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0122 12:32:55.179088    9928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0122 12:32:57.126888    9928 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0122 12:32:57.126888    9928 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0122 12:32:57.126888    9928 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0122 12:32:57.126888    9928 command_runner.go:130] > daemonset.apps/kindnet configured
	I0122 12:32:57.128421    9928 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.9493247s)
	I0122 12:32:57.128570    9928 system_pods.go:43] waiting for kube-system pods to appear ...
	I0122 12:32:57.128623    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods
	I0122 12:32:57.128623    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:57.128623    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:57.128623    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:57.134192    9928 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:32:57.134192    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:57.134192    9928 round_trippers.go:580]     Audit-Id: 5c2ddcc9-e068-4f9d-92a2-1fc3c736beb1
	I0122 12:32:57.134192    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:57.134192    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:57.134192    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:57.134192    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:57.134893    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:57 GMT
	I0122 12:32:57.136389    9928 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1734"},"items":[{"metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84179 chars]
	I0122 12:32:57.142709    9928 system_pods.go:59] 12 kube-system pods found
	I0122 12:32:57.143235    9928 system_pods.go:61] "coredns-5dd5756b68-g9wk9" [d5b665f2-ead9-43b8-8bf6-73c2edf81f8f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0122 12:32:57.143287    9928 system_pods.go:61] "etcd-multinode-484200" [dc38500e-1aab-483b-bc82-a1071a7b7796] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0122 12:32:57.143287    9928 system_pods.go:61] "kindnet-f9bw4" [8ee26eb1-daae-41a6-8d1c-e7f3e821ebb5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0122 12:32:57.143287    9928 system_pods.go:61] "kindnet-xqpt7" [302b6561-a7e7-40f7-8b2d-9cbe86c27476] Running
	I0122 12:32:57.143287    9928 system_pods.go:61] "kindnet-zs4rk" [a84b5ca3-20c8-4246-911d-8ca0f0f5bf88] Running
	I0122 12:32:57.143287    9928 system_pods.go:61] "kube-apiserver-multinode-484200" [1a34c38d-813d-4a41-ab10-ee15c4a85906] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0122 12:32:57.143287    9928 system_pods.go:61] "kube-controller-manager-multinode-484200" [2f20b1d1-9ba0-417b-9060-a0e50a1522ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0122 12:32:57.143287    9928 system_pods.go:61] "kube-proxy-m94rg" [603ddbfd-0357-4e57-a38a-5054e02f81ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0122 12:32:57.143395    9928 system_pods.go:61] "kube-proxy-mxd9x" [f90d5b6e-4f0e-466f-8cc8-bc71ebdbad3c] Running
	I0122 12:32:57.143395    9928 system_pods.go:61] "kube-proxy-nsjc6" [6023ce45-661a-489d-aacf-21a85a2c32e4] Running
	I0122 12:32:57.143437    9928 system_pods.go:61] "kube-scheduler-multinode-484200" [d2f95119-a5d5-4331-880e-bf6add1d87b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0122 12:32:57.143464    9928 system_pods.go:61] "storage-provisioner" [30187bc8-2b16-4ca8-944b-c71bfb25e36b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0122 12:32:57.143464    9928 system_pods.go:74] duration metric: took 14.8403ms to wait for pod list to return data ...
	I0122 12:32:57.143464    9928 node_conditions.go:102] verifying NodePressure condition ...
	I0122 12:32:57.143529    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes
	I0122 12:32:57.143529    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:57.143529    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:57.143529    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:57.147141    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:32:57.147141    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:57.147141    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:57.147141    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:57.147141    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:57.148131    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:57.148131    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:57 GMT
	I0122 12:32:57.148131    9928 round_trippers.go:580]     Audit-Id: fffe29c1-7781-4c33-ae6a-25fc47eb98b2
	I0122 12:32:57.148131    9928 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1734"},"items":[{"metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1662","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 14859 chars]
	I0122 12:32:57.149189    9928 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0122 12:32:57.149189    9928 node_conditions.go:123] node cpu capacity is 2
	I0122 12:32:57.149189    9928 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0122 12:32:57.149189    9928 node_conditions.go:123] node cpu capacity is 2
	I0122 12:32:57.149189    9928 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0122 12:32:57.149189    9928 node_conditions.go:123] node cpu capacity is 2
	I0122 12:32:57.149189    9928 node_conditions.go:105] duration metric: took 5.7256ms to run NodePressure ...
	I0122 12:32:57.149189    9928 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 12:32:57.483493    9928 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0122 12:32:57.483563    9928 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0122 12:32:57.483719    9928 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0122 12:32:57.483800    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0122 12:32:57.483800    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:57.483800    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:57.484028    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:57.489230    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:32:57.489230    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:57.489230    9928 round_trippers.go:580]     Audit-Id: fc6d313b-8e86-403f-bc86-9c57430354a3
	I0122 12:32:57.489230    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:57.489230    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:57.489230    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:57.489230    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:57.489230    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:57 GMT
	I0122 12:32:57.489230    9928 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1736"},"items":[{"metadata":{"name":"etcd-multinode-484200","namespace":"kube-system","uid":"dc38500e-1aab-483b-bc82-a1071a7b7796","resourceVersion":"1710","creationTimestamp":"2024-01-22T12:32:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.191.211:2379","kubernetes.io/config.hash":"487d659dc59bb7ccbc4aa1b97ecfd4c1","kubernetes.io/config.mirror":"487d659dc59bb7ccbc4aa1b97ecfd4c1","kubernetes.io/config.seen":"2024-01-22T12:32:47.015233808Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:32:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 29377 chars]
	I0122 12:32:57.491141    9928 kubeadm.go:787] kubelet initialised
	I0122 12:32:57.491141    9928 kubeadm.go:788] duration metric: took 7.4224ms waiting for restarted kubelet to initialise ...
	I0122 12:32:57.491141    9928 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0122 12:32:57.491141    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods
	I0122 12:32:57.491141    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:57.491141    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:57.491141    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:57.499217    9928 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0122 12:32:57.499217    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:57.499217    9928 round_trippers.go:580]     Audit-Id: c28dc28a-f0c8-48a5-ba74-4b0d894ff390
	I0122 12:32:57.499217    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:57.499217    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:57.499217    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:57.499303    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:57.499303    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:57 GMT
	I0122 12:32:57.500878    9928 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1736"},"items":[{"metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84179 chars]
	I0122 12:32:57.504211    9928 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-g9wk9" in "kube-system" namespace to be "Ready" ...
	I0122 12:32:57.504799    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:32:57.504873    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:57.504873    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:57.504873    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:57.510364    9928 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:32:57.510364    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:57.510364    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:57.510984    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:57.510984    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:57.510984    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:57 GMT
	I0122 12:32:57.511078    9928 round_trippers.go:580]     Audit-Id: cf97ab7e-8a8f-4207-bb20-b81c433b289c
	I0122 12:32:57.511105    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:57.511105    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:32:57.511838    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:32:57.511882    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:57.511882    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:57.511882    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:57.514408    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:32:57.514408    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:57.515293    9928 round_trippers.go:580]     Audit-Id: 3ef2f7fd-07e0-4803-a40c-c5e4cd9ef057
	I0122 12:32:57.515316    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:57.515316    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:57.515316    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:57.515316    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:57.515316    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:57 GMT
	I0122 12:32:57.515673    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1662","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0122 12:32:57.518287    9928 pod_ready.go:97] node "multinode-484200" hosting pod "coredns-5dd5756b68-g9wk9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484200" has status "Ready":"False"
	I0122 12:32:57.518287    9928 pod_ready.go:81] duration metric: took 14.0758ms waiting for pod "coredns-5dd5756b68-g9wk9" in "kube-system" namespace to be "Ready" ...
	E0122 12:32:57.518287    9928 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-484200" hosting pod "coredns-5dd5756b68-g9wk9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484200" has status "Ready":"False"
	I0122 12:32:57.518287    9928 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:32:57.518287    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-484200
	I0122 12:32:57.518287    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:57.518287    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:57.518287    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:57.520286    9928 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0122 12:32:57.520286    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:57.520286    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:57.521136    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:57.521136    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:57.521136    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:57.521136    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:57 GMT
	I0122 12:32:57.521317    9928 round_trippers.go:580]     Audit-Id: 4b0e9c99-2fed-4725-b64f-99a0e1b74caf
	I0122 12:32:57.521493    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-484200","namespace":"kube-system","uid":"dc38500e-1aab-483b-bc82-a1071a7b7796","resourceVersion":"1710","creationTimestamp":"2024-01-22T12:32:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.191.211:2379","kubernetes.io/config.hash":"487d659dc59bb7ccbc4aa1b97ecfd4c1","kubernetes.io/config.mirror":"487d659dc59bb7ccbc4aa1b97ecfd4c1","kubernetes.io/config.seen":"2024-01-22T12:32:47.015233808Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:32:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6097 chars]
	I0122 12:32:57.521954    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:32:57.521954    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:57.521954    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:57.521954    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:57.524278    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:32:57.524278    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:57.524278    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:57.524278    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:57.524278    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:57.524278    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:57.524278    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:57 GMT
	I0122 12:32:57.525286    9928 round_trippers.go:580]     Audit-Id: fc6a76f0-4529-4301-99d7-a58f72c928ae
	I0122 12:32:57.525286    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1662","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0122 12:32:57.525286    9928 pod_ready.go:97] node "multinode-484200" hosting pod "etcd-multinode-484200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484200" has status "Ready":"False"
	I0122 12:32:57.525286    9928 pod_ready.go:81] duration metric: took 6.9997ms waiting for pod "etcd-multinode-484200" in "kube-system" namespace to be "Ready" ...
	E0122 12:32:57.525286    9928 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-484200" hosting pod "etcd-multinode-484200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484200" has status "Ready":"False"
	I0122 12:32:57.525286    9928 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:32:57.525286    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-484200
	I0122 12:32:57.525286    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:57.525286    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:57.525286    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:57.529278    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:32:57.530277    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:57.530322    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:57 GMT
	I0122 12:32:57.530322    9928 round_trippers.go:580]     Audit-Id: 610f64d3-0767-429a-8a16-b47ae46a97c1
	I0122 12:32:57.530322    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:57.530322    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:57.530322    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:57.530322    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:57.530322    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-484200","namespace":"kube-system","uid":"1a34c38d-813d-4a41-ab10-ee15c4a85906","resourceVersion":"1711","creationTimestamp":"2024-01-22T12:32:54Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.191.211:8443","kubernetes.io/config.hash":"58aa3387db541c8286ea4c05e28153aa","kubernetes.io/config.mirror":"58aa3387db541c8286ea4c05e28153aa","kubernetes.io/config.seen":"2024-01-22T12:32:47.015239308Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:32:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7653 chars]
	I0122 12:32:57.531274    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:32:57.531274    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:57.531341    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:57.531341    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:57.533916    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:32:57.533916    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:57.533916    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:57.533916    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:57.533916    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:57.533916    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:57 GMT
	I0122 12:32:57.533916    9928 round_trippers.go:580]     Audit-Id: ecdca817-00e0-44e6-93cd-d9d42d7397cc
	I0122 12:32:57.533916    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:57.534755    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1662","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0122 12:32:57.535268    9928 pod_ready.go:97] node "multinode-484200" hosting pod "kube-apiserver-multinode-484200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484200" has status "Ready":"False"
	I0122 12:32:57.535268    9928 pod_ready.go:81] duration metric: took 9.9812ms waiting for pod "kube-apiserver-multinode-484200" in "kube-system" namespace to be "Ready" ...
	E0122 12:32:57.535268    9928 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-484200" hosting pod "kube-apiserver-multinode-484200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484200" has status "Ready":"False"
	I0122 12:32:57.535268    9928 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:32:57.535268    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-484200
	I0122 12:32:57.535268    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:57.535268    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:57.535268    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:57.539663    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:32:57.539719    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:57.539719    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:57 GMT
	I0122 12:32:57.539719    9928 round_trippers.go:580]     Audit-Id: 6ca054d8-e630-479b-876d-f4174f6c7148
	I0122 12:32:57.539778    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:57.539791    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:57.539791    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:57.539791    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:57.540799    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-484200","namespace":"kube-system","uid":"2f20b1d1-9ba0-417b-9060-a0e50a1522ad","resourceVersion":"1712","creationTimestamp":"2024-01-22T12:10:47Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"09e3d582188e7c296d7c87aae161a2a5","kubernetes.io/config.mirror":"09e3d582188e7c296d7c87aae161a2a5","kubernetes.io/config.seen":"2024-01-22T12:10:46.856991974Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7441 chars]
	I0122 12:32:57.540908    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:32:57.540908    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:57.540908    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:57.540908    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:57.544123    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:32:57.544123    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:57.544123    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:57.544123    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:57 GMT
	I0122 12:32:57.544123    9928 round_trippers.go:580]     Audit-Id: 6c4c38eb-d93d-4025-a0e3-e091d8f4c98b
	I0122 12:32:57.544123    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:57.544123    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:57.544123    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:57.545110    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1662","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0122 12:32:57.545110    9928 pod_ready.go:97] node "multinode-484200" hosting pod "kube-controller-manager-multinode-484200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484200" has status "Ready":"False"
	I0122 12:32:57.545110    9928 pod_ready.go:81] duration metric: took 9.8423ms waiting for pod "kube-controller-manager-multinode-484200" in "kube-system" namespace to be "Ready" ...
	E0122 12:32:57.545110    9928 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-484200" hosting pod "kube-controller-manager-multinode-484200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484200" has status "Ready":"False"
	I0122 12:32:57.545110    9928 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-m94rg" in "kube-system" namespace to be "Ready" ...
	I0122 12:32:57.743428    9928 request.go:629] Waited for 198.0538ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m94rg
	I0122 12:32:57.743428    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m94rg
	I0122 12:32:57.743428    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:57.743428    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:57.743428    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:57.748015    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:32:57.748015    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:57.748015    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:57.748015    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:57.748477    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:57 GMT
	I0122 12:32:57.748477    9928 round_trippers.go:580]     Audit-Id: fd54bf24-3f89-4fe1-993b-153c00cf6dc6
	I0122 12:32:57.748519    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:57.748519    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:57.748725    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-m94rg","generateName":"kube-proxy-","namespace":"kube-system","uid":"603ddbfd-0357-4e57-a38a-5054e02f81ee","resourceVersion":"1714","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7762395d-4348-422a-b76e-6e2cdf460767","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7762395d-4348-422a-b76e-6e2cdf460767\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5933 chars]
	I0122 12:32:57.932511    9928 request.go:629] Waited for 183.0372ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:32:57.932783    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:32:57.932783    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:57.932783    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:57.932783    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:57.939759    9928 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0122 12:32:57.939759    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:57.939759    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:57.939759    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:57.939759    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:57 GMT
	I0122 12:32:57.939759    9928 round_trippers.go:580]     Audit-Id: 6fc61b84-923d-43be-b75c-40b80af3799d
	I0122 12:32:57.939759    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:57.939759    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:57.941239    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1662","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0122 12:32:57.941239    9928 pod_ready.go:97] node "multinode-484200" hosting pod "kube-proxy-m94rg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484200" has status "Ready":"False"
	I0122 12:32:57.941239    9928 pod_ready.go:81] duration metric: took 396.1268ms waiting for pod "kube-proxy-m94rg" in "kube-system" namespace to be "Ready" ...
	E0122 12:32:57.941771    9928 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-484200" hosting pod "kube-proxy-m94rg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484200" has status "Ready":"False"
	I0122 12:32:57.941771    9928 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mxd9x" in "kube-system" namespace to be "Ready" ...
	I0122 12:32:58.136514    9928 request.go:629] Waited for 194.4463ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxd9x
	I0122 12:32:58.136834    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxd9x
	I0122 12:32:58.136834    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:58.136834    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:58.136922    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:58.140572    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:32:58.140572    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:58.140572    9928 round_trippers.go:580]     Audit-Id: aa28fa26-7e02-4987-afe5-c415a1915a5f
	I0122 12:32:58.141317    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:58.141317    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:58.141317    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:58.141317    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:58.141317    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:58 GMT
	I0122 12:32:58.141956    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mxd9x","generateName":"kube-proxy-","namespace":"kube-system","uid":"f90d5b6e-4f0e-466f-8cc8-bc71ebdbad3c","resourceVersion":"1585","creationTimestamp":"2024-01-22T12:18:24Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7762395d-4348-422a-b76e-6e2cdf460767","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:18:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7762395d-4348-422a-b76e-6e2cdf460767\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5751 chars]
	I0122 12:32:58.340398    9928 request.go:629] Waited for 197.351ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:32:58.340789    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:32:58.340821    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:58.340821    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:58.340821    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:58.346086    9928 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:32:58.346086    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:58.346292    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:58 GMT
	I0122 12:32:58.346292    9928 round_trippers.go:580]     Audit-Id: 71ea3118-aeab-4291-bf79-fd62caf1a5f3
	I0122 12:32:58.346292    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:58.346292    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:58.346370    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:58.346370    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:58.346691    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"1d1fc222-9802-4017-8aee-b3310ac58592","resourceVersion":"1605","creationTimestamp":"2024-01-22T12:28:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_28_39_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:28:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3637 chars]
	I0122 12:32:58.347384    9928 pod_ready.go:92] pod "kube-proxy-mxd9x" in "kube-system" namespace has status "Ready":"True"
	I0122 12:32:58.347384    9928 pod_ready.go:81] duration metric: took 405.6106ms waiting for pod "kube-proxy-mxd9x" in "kube-system" namespace to be "Ready" ...
	I0122 12:32:58.347470    9928 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nsjc6" in "kube-system" namespace to be "Ready" ...
	I0122 12:32:58.530451    9928 request.go:629] Waited for 182.6505ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nsjc6
	I0122 12:32:58.530771    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nsjc6
	I0122 12:32:58.530845    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:58.530845    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:58.530926    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:58.534623    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:32:58.534623    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:58.535438    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:58.535438    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:58 GMT
	I0122 12:32:58.535438    9928 round_trippers.go:580]     Audit-Id: 3d799f44-036a-48ee-9840-fd496952751e
	I0122 12:32:58.535438    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:58.535438    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:58.535520    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:58.535839    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nsjc6","generateName":"kube-proxy-","namespace":"kube-system","uid":"6023ce45-661a-489d-aacf-21a85a2c32e4","resourceVersion":"612","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7762395d-4348-422a-b76e-6e2cdf460767","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7762395d-4348-422a-b76e-6e2cdf460767\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0122 12:32:58.736348    9928 request.go:629] Waited for 199.5591ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:32:58.736518    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:32:58.736518    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:58.736518    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:58.736518    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:58.741034    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:32:58.741034    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:58.742072    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:58.742102    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:58.742102    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:58.742102    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:58 GMT
	I0122 12:32:58.742102    9928 round_trippers.go:580]     Audit-Id: 97b09077-5884-4312-a170-c60143a7df89
	I0122 12:32:58.742102    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:58.742276    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"1577","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_28_39_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager": [truncated 3819 chars]
	I0122 12:32:58.743128    9928 pod_ready.go:92] pod "kube-proxy-nsjc6" in "kube-system" namespace has status "Ready":"True"
	I0122 12:32:58.743128    9928 pod_ready.go:81] duration metric: took 395.6566ms waiting for pod "kube-proxy-nsjc6" in "kube-system" namespace to be "Ready" ...
	I0122 12:32:58.743128    9928 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:32:58.940955    9928 request.go:629] Waited for 197.4011ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-484200
	I0122 12:32:58.941143    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-484200
	I0122 12:32:58.941143    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:58.941143    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:58.941143    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:58.948247    9928 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0122 12:32:58.948332    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:58.948332    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:58.948393    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:58.948442    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:58.948442    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:58 GMT
	I0122 12:32:58.948442    9928 round_trippers.go:580]     Audit-Id: bf6964d6-0e70-4a94-9309-85088a57d80c
	I0122 12:32:58.948442    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:58.948442    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-484200","namespace":"kube-system","uid":"d2f95119-a5d5-4331-880e-bf6add1d87b6","resourceVersion":"1713","creationTimestamp":"2024-01-22T12:10:47Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"59ca1fdff10e7440743e9ed3e4fed55c","kubernetes.io/config.mirror":"59ca1fdff10e7440743e9ed3e4fed55c","kubernetes.io/config.seen":"2024-01-22T12:10:46.856993074Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5153 chars]
	I0122 12:32:59.130879    9928 request.go:629] Waited for 181.0422ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:32:59.131379    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:32:59.131379    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:59.131379    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:59.131498    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:59.136635    9928 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:32:59.136635    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:59.136777    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:59.136777    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:59.136777    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:59.136777    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:59.136777    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:59 GMT
	I0122 12:32:59.136834    9928 round_trippers.go:580]     Audit-Id: 5b39e856-5221-45ae-b22c-becbfed55ee4
	I0122 12:32:59.137067    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1662","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0122 12:32:59.137187    9928 pod_ready.go:97] node "multinode-484200" hosting pod "kube-scheduler-multinode-484200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484200" has status "Ready":"False"
	I0122 12:32:59.137187    9928 pod_ready.go:81] duration metric: took 394.0571ms waiting for pod "kube-scheduler-multinode-484200" in "kube-system" namespace to be "Ready" ...
	E0122 12:32:59.137187    9928 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-484200" hosting pod "kube-scheduler-multinode-484200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484200" has status "Ready":"False"
	I0122 12:32:59.137187    9928 pod_ready.go:38] duration metric: took 1.6460383s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0122 12:32:59.137711    9928 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0122 12:32:59.155663    9928 command_runner.go:130] > -16
	I0122 12:32:59.155865    9928 ops.go:34] apiserver oom_adj: -16
	I0122 12:32:59.155865    9928 kubeadm.go:640] restartCluster took 14.8455325s
	I0122 12:32:59.155903    9928 kubeadm.go:406] StartCluster complete in 14.9119591s
	I0122 12:32:59.155903    9928 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 12:32:59.156247    9928 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 12:32:59.157626    9928 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 12:32:59.158468    9928 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0122 12:32:59.159111    9928 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0122 12:32:59.160075    9928 out.go:177] * Enabled addons: 
	I0122 12:32:59.159974    9928 config.go:182] Loaded profile config "multinode-484200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 12:32:59.160876    9928 addons.go:505] enable addons completed in 1.7649ms: enabled=[]
	I0122 12:32:59.171873    9928 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 12:32:59.172927    9928 kapi.go:59] client config for multinode-484200: &rest.Config{Host:"https://172.23.191.211:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x184a600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0122 12:32:59.174871    9928 cert_rotation.go:137] Starting client certificate rotation controller
	I0122 12:32:59.174871    9928 round_trippers.go:463] GET https://172.23.191.211:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0122 12:32:59.174871    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:59.174871    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:59.174871    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:59.188321    9928 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0122 12:32:59.188960    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:59.188960    9928 round_trippers.go:580]     Content-Length: 292
	I0122 12:32:59.188960    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:59 GMT
	I0122 12:32:59.188960    9928 round_trippers.go:580]     Audit-Id: 72d0b8e4-ee81-4038-b8cf-b109864038a5
	I0122 12:32:59.188960    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:59.188960    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:59.188960    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:59.189071    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:59.189092    9928 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"98df7e30-2026-4507-ad8d-589d82e9b1ba","resourceVersion":"1735","creationTimestamp":"2024-01-22T12:10:46Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0122 12:32:59.189212    9928 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-484200" context rescaled to 1 replicas
	I0122 12:32:59.189212    9928 start.go:223] Will wait 6m0s for node &{Name: IP:172.23.191.211 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0122 12:32:59.190316    9928 out.go:177] * Verifying Kubernetes components...
	I0122 12:32:59.203797    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 12:32:59.396292    9928 command_runner.go:130] > apiVersion: v1
	I0122 12:32:59.396379    9928 command_runner.go:130] > data:
	I0122 12:32:59.396379    9928 command_runner.go:130] >   Corefile: |
	I0122 12:32:59.396379    9928 command_runner.go:130] >     .:53 {
	I0122 12:32:59.396379    9928 command_runner.go:130] >         log
	I0122 12:32:59.396379    9928 command_runner.go:130] >         errors
	I0122 12:32:59.396379    9928 command_runner.go:130] >         health {
	I0122 12:32:59.396379    9928 command_runner.go:130] >            lameduck 5s
	I0122 12:32:59.396458    9928 command_runner.go:130] >         }
	I0122 12:32:59.396458    9928 command_runner.go:130] >         ready
	I0122 12:32:59.396490    9928 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0122 12:32:59.396490    9928 command_runner.go:130] >            pods insecure
	I0122 12:32:59.396490    9928 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0122 12:32:59.396521    9928 command_runner.go:130] >            ttl 30
	I0122 12:32:59.396521    9928 command_runner.go:130] >         }
	I0122 12:32:59.396521    9928 command_runner.go:130] >         prometheus :9153
	I0122 12:32:59.396589    9928 command_runner.go:130] >         hosts {
	I0122 12:32:59.396652    9928 command_runner.go:130] >            172.23.176.1 host.minikube.internal
	I0122 12:32:59.396676    9928 command_runner.go:130] >            fallthrough
	I0122 12:32:59.396676    9928 command_runner.go:130] >         }
	I0122 12:32:59.396676    9928 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0122 12:32:59.396705    9928 command_runner.go:130] >            max_concurrent 1000
	I0122 12:32:59.396705    9928 command_runner.go:130] >         }
	I0122 12:32:59.396705    9928 command_runner.go:130] >         cache 30
	I0122 12:32:59.396705    9928 command_runner.go:130] >         loop
	I0122 12:32:59.396705    9928 command_runner.go:130] >         reload
	I0122 12:32:59.396705    9928 command_runner.go:130] >         loadbalance
	I0122 12:32:59.396705    9928 command_runner.go:130] >     }
	I0122 12:32:59.396705    9928 command_runner.go:130] > kind: ConfigMap
	I0122 12:32:59.396705    9928 command_runner.go:130] > metadata:
	I0122 12:32:59.396705    9928 command_runner.go:130] >   creationTimestamp: "2024-01-22T12:10:46Z"
	I0122 12:32:59.396705    9928 command_runner.go:130] >   name: coredns
	I0122 12:32:59.396705    9928 command_runner.go:130] >   namespace: kube-system
	I0122 12:32:59.396705    9928 command_runner.go:130] >   resourceVersion: "398"
	I0122 12:32:59.396705    9928 command_runner.go:130] >   uid: b9768c33-b82e-4fdb-8e5b-45028bdc3da3
	I0122 12:32:59.396705    9928 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0122 12:32:59.396705    9928 node_ready.go:35] waiting up to 6m0s for node "multinode-484200" to be "Ready" ...
	I0122 12:32:59.397231    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:32:59.397270    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:59.397270    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:59.397270    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:59.401848    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:32:59.401848    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:59.401848    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:59.401914    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:59.401963    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:59.402098    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:59 GMT
	I0122 12:32:59.402098    9928 round_trippers.go:580]     Audit-Id: 99565fdb-b028-44db-b606-07cdb721f1db
	I0122 12:32:59.402098    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:59.402279    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1662","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0122 12:32:59.907644    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:32:59.907738    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:59.907826    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:59.907826    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:59.912398    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:32:59.912398    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:59.912398    9928 round_trippers.go:580]     Audit-Id: 803f8815-43ca-4487-b4ed-8a415a28bdad
	I0122 12:32:59.912398    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:59.912398    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:59.912732    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:59.912732    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:59.912732    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:59 GMT
	I0122 12:32:59.913360    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1662","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0122 12:33:00.412401    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:00.412491    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:00.412491    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:00.412491    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:00.416866    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:33:00.416866    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:00.416866    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:00.416866    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:00.416866    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:00.416866    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:00 GMT
	I0122 12:33:00.416866    9928 round_trippers.go:580]     Audit-Id: 1e3900df-764d-4eb8-887f-b7bfdf742c8c
	I0122 12:33:00.416866    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:00.418034    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1662","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0122 12:33:00.902871    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:00.902937    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:00.902937    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:00.902937    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:00.906712    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:00.907394    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:00.907394    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:00.907394    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:00 GMT
	I0122 12:33:00.907394    9928 round_trippers.go:580]     Audit-Id: fa2e302d-8084-4e71-a8b6-531798fc7a72
	I0122 12:33:00.907463    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:00.907463    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:00.907490    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:00.907634    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1662","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0122 12:33:01.405710    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:01.405806    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:01.405918    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:01.405918    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:01.409954    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:33:01.409954    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:01.410485    9928 round_trippers.go:580]     Audit-Id: 4333e119-5e20-4db3-ae9e-50fe1c3d26d1
	I0122 12:33:01.410485    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:01.410485    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:01.410485    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:01.410485    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:01.410485    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:01 GMT
	I0122 12:33:01.410804    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1662","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0122 12:33:01.411393    9928 node_ready.go:58] node "multinode-484200" has status "Ready":"False"
	I0122 12:33:01.913144    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:01.913244    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:01.913244    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:01.913244    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:01.917756    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:33:01.917756    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:01.918416    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:01.918416    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:01.918416    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:01.918416    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:01.918483    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:01 GMT
	I0122 12:33:01.918483    9928 round_trippers.go:580]     Audit-Id: c92cd6bd-5659-48f4-9fdf-5c5d2569ace2
	I0122 12:33:01.918980    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1662","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0122 12:33:02.399554    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:02.399640    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:02.399640    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:02.399640    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:02.403032    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:02.403032    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:02.403032    9928 round_trippers.go:580]     Audit-Id: 22912381-d38e-4427-92ae-aa251613612a
	I0122 12:33:02.403032    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:02.403032    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:02.403427    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:02.403427    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:02.403427    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:02 GMT
	I0122 12:33:02.403714    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1662","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0122 12:33:02.903012    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:02.903086    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:02.903086    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:02.903086    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:02.906921    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:02.906921    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:02.907639    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:02.907639    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:02.907639    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:02.907639    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:02.907639    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:02 GMT
	I0122 12:33:02.907639    9928 round_trippers.go:580]     Audit-Id: 9bcd172d-05a2-4908-a57b-5b0d1ca09750
	I0122 12:33:02.907639    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:02.908431    9928 node_ready.go:49] node "multinode-484200" has status "Ready":"True"
	I0122 12:33:02.908431    9928 node_ready.go:38] duration metric: took 3.51171s waiting for node "multinode-484200" to be "Ready" ...
	I0122 12:33:02.908514    9928 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0122 12:33:02.908612    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods
	I0122 12:33:02.908714    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:02.908714    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:02.908714    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:02.913805    9928 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:33:02.913805    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:02.913805    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:02.913805    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:02.914046    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:02.914046    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:02.914046    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:02 GMT
	I0122 12:33:02.914046    9928 round_trippers.go:580]     Audit-Id: 73aca338-8ba7-4bf3-bd8a-f5e6d4d96c87
	I0122 12:33:02.917329    9928 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1751"},"items":[{"metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83589 chars]
	I0122 12:33:02.921475    9928 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-g9wk9" in "kube-system" namespace to be "Ready" ...
	I0122 12:33:02.921703    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:02.921727    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:02.921727    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:02.921793    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:02.925606    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:02.926198    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:02.926279    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:02.926279    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:02.926279    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:02.926355    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:02 GMT
	I0122 12:33:02.926383    9928 round_trippers.go:580]     Audit-Id: 97d58a5d-023f-4689-9e2b-606ddef97214
	I0122 12:33:02.926383    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:02.926622    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:33:02.927242    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:02.927303    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:02.927303    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:02.927373    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:02.930508    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:02.930604    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:02.930604    9928 round_trippers.go:580]     Audit-Id: 6d3336ab-84fb-40f2-a4ed-70d80d715733
	I0122 12:33:02.930604    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:02.930670    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:02.930670    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:02.930670    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:02.930670    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:02 GMT
	I0122 12:33:02.930670    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:03.422552    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:03.422552    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:03.422552    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:03.422552    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:03.428157    9928 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:33:03.428157    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:03.428157    9928 round_trippers.go:580]     Audit-Id: 45b79bb8-a77d-4642-a82f-33059d5c0f74
	I0122 12:33:03.428157    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:03.428157    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:03.428157    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:03.428407    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:03.428407    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:03 GMT
	I0122 12:33:03.428605    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:33:03.429642    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:03.429642    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:03.429642    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:03.429642    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:03.436167    9928 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0122 12:33:03.436167    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:03.436167    9928 round_trippers.go:580]     Audit-Id: 7c33f8b8-7023-4e20-a6d5-9ae300e9c751
	I0122 12:33:03.436167    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:03.436167    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:03.436167    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:03.436167    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:03.436167    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:03 GMT
	I0122 12:33:03.436167    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:03.922827    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:03.922903    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:03.922903    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:03.922903    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:03.927782    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:33:03.928208    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:03.928208    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:03.928208    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:03.928208    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:03.928208    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:03 GMT
	I0122 12:33:03.928422    9928 round_trippers.go:580]     Audit-Id: c1ee8e10-c16b-41f0-8e3d-8055f2eb2cbd
	I0122 12:33:03.928465    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:03.928702    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:33:03.929590    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:03.929590    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:03.929663    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:03.929663    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:03.933056    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:03.933056    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:03.933056    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:03.933056    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:03.933056    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:03 GMT
	I0122 12:33:03.933056    9928 round_trippers.go:580]     Audit-Id: 05e713e3-3b1e-4158-9ae4-ed149a1bab9d
	I0122 12:33:03.933452    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:03.933452    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:03.933741    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:04.437465    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:04.437544    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:04.437544    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:04.437606    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:04.441313    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:04.442158    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:04.442158    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:04.442158    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:04 GMT
	I0122 12:33:04.442158    9928 round_trippers.go:580]     Audit-Id: 06fb88c6-9c80-4de6-bdbb-86a825f637b6
	I0122 12:33:04.442158    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:04.442158    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:04.442158    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:04.442509    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:33:04.443241    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:04.443241    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:04.443241    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:04.443241    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:04.445603    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:33:04.445603    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:04.446549    9928 round_trippers.go:580]     Audit-Id: 395cc31b-be74-4f69-9162-920ad6543ee4
	I0122 12:33:04.446549    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:04.446549    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:04.446549    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:04.446549    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:04.446611    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:04 GMT
	I0122 12:33:04.446611    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:04.936235    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:04.936235    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:04.936235    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:04.936235    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:04.939713    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:04.939713    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:04.940712    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:04.940712    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:04.940748    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:04.940748    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:04.940748    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:04 GMT
	I0122 12:33:04.940748    9928 round_trippers.go:580]     Audit-Id: f26c1e48-d816-490a-a692-cfeb3e2f8059
	I0122 12:33:04.940852    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:33:04.941095    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:04.941687    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:04.941687    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:04.941687    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:04.944798    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:04.944798    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:04.944798    9928 round_trippers.go:580]     Audit-Id: 209edfa7-b032-4b39-b098-4e3077658a9e
	I0122 12:33:04.944798    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:04.944798    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:04.944798    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:04.944798    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:04.944798    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:04 GMT
	I0122 12:33:04.944798    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:04.945762    9928 pod_ready.go:102] pod "coredns-5dd5756b68-g9wk9" in "kube-system" namespace has status "Ready":"False"
	I0122 12:33:05.424190    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:05.424190    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:05.424190    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:05.424190    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:05.428792    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:33:05.429188    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:05.429188    9928 round_trippers.go:580]     Audit-Id: 8716a6b4-f07a-49cc-8202-99a9ed9e7319
	I0122 12:33:05.429188    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:05.429188    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:05.429188    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:05.429188    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:05.429188    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:05 GMT
	I0122 12:33:05.429188    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:33:05.430008    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:05.430008    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:05.430008    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:05.430008    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:05.433679    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:05.433679    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:05.433679    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:05.433679    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:05.433679    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:05 GMT
	I0122 12:33:05.433679    9928 round_trippers.go:580]     Audit-Id: 63a86ddc-b654-4126-bf88-38ad91076e1d
	I0122 12:33:05.434276    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:05.434276    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:05.434589    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:05.926131    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:05.926222    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:05.926222    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:05.926315    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:05.928914    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:33:05.928914    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:05.929338    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:05.929338    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:05.929338    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:05.929338    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:05.929338    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:05 GMT
	I0122 12:33:05.929338    9928 round_trippers.go:580]     Audit-Id: 2d58ba30-fc82-407f-92c8-5b0509ac5720
	I0122 12:33:05.929400    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:33:05.930272    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:05.930350    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:05.930350    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:05.930350    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:05.936114    9928 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:33:05.936114    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:05.936594    9928 round_trippers.go:580]     Audit-Id: 7da4d310-fb3c-4a51-9239-56df5133e0d9
	I0122 12:33:05.936594    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:05.936594    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:05.936594    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:05.936594    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:05.936665    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:05 GMT
	I0122 12:33:05.936925    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:06.428134    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:06.428244    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:06.428315    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:06.428315    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:06.444548    9928 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0122 12:33:06.444614    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:06.444614    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:06.444614    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:06 GMT
	I0122 12:33:06.444614    9928 round_trippers.go:580]     Audit-Id: 5618bbfa-589b-4829-84d4-9bcc30e1e1e5
	I0122 12:33:06.444614    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:06.444684    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:06.444684    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:06.444917    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:33:06.445710    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:06.445791    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:06.445791    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:06.445791    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:06.450322    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:33:06.450322    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:06.450322    9928 round_trippers.go:580]     Audit-Id: f8cd9dfa-6812-44c7-9431-6736f208d64c
	I0122 12:33:06.450322    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:06.450322    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:06.450322    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:06.450322    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:06.450322    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:06 GMT
	I0122 12:33:06.450322    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:06.930459    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:06.930459    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:06.930459    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:06.930459    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:06.935069    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:33:06.935286    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:06.935286    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:06.935286    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:06.935286    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:06.935371    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:06 GMT
	I0122 12:33:06.935371    9928 round_trippers.go:580]     Audit-Id: 6612e434-549a-4cfe-ae2e-045143970d51
	I0122 12:33:06.935437    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:06.935647    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:33:06.936340    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:06.936340    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:06.936510    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:06.936510    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:06.939717    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:06.939717    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:06.939717    9928 round_trippers.go:580]     Audit-Id: 96e096ff-4619-46e7-af7d-afeddfe8e86f
	I0122 12:33:06.939717    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:06.940599    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:06.940599    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:06.940599    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:06.940599    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:06 GMT
	I0122 12:33:06.940937    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:07.433548    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:07.433604    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:07.433604    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:07.433604    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:07.437039    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:07.437039    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:07.437039    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:07.437593    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:07.437593    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:07.437593    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:07.437593    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:07 GMT
	I0122 12:33:07.437593    9928 round_trippers.go:580]     Audit-Id: f012c16f-4c03-444a-8eb3-24a551749c50
	I0122 12:33:07.437922    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:33:07.438661    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:07.438661    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:07.438661    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:07.438661    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:07.442000    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:07.442000    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:07.442000    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:07 GMT
	I0122 12:33:07.442000    9928 round_trippers.go:580]     Audit-Id: d83e9538-b67a-42f1-9093-d6416eeed3df
	I0122 12:33:07.442536    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:07.442536    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:07.442536    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:07.442536    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:07.443187    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:07.443927    9928 pod_ready.go:102] pod "coredns-5dd5756b68-g9wk9" in "kube-system" namespace has status "Ready":"False"
	I0122 12:33:07.923519    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:07.923519    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:07.923519    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:07.923519    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:07.928166    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:07.928209    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:07.928209    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:07 GMT
	I0122 12:33:07.928209    9928 round_trippers.go:580]     Audit-Id: f5c1ffc6-b693-43d8-b72a-1e6bc6abe580
	I0122 12:33:07.928209    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:07.928209    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:07.928209    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:07.928209    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:07.928209    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:33:07.929094    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:07.929094    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:07.929094    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:07.929094    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:07.932679    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:07.932679    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:07.932679    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:07 GMT
	I0122 12:33:07.933095    9928 round_trippers.go:580]     Audit-Id: 7e75b55b-c186-4e0a-9fea-7cd42911fce1
	I0122 12:33:07.933095    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:07.933095    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:07.933095    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:07.933095    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:07.933410    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:08.435102    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:08.435177    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:08.435177    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:08.435249    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:08.439172    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:08.440155    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:08.440155    9928 round_trippers.go:580]     Audit-Id: cb24955d-1368-47aa-a4f2-0dee54185a98
	I0122 12:33:08.440206    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:08.440206    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:08.440206    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:08.440206    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:08.440206    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:08 GMT
	I0122 12:33:08.440738    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:33:08.441205    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:08.441205    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:08.441205    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:08.441847    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:08.448790    9928 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0122 12:33:08.448790    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:08.448790    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:08.448790    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:08.448790    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:08 GMT
	I0122 12:33:08.448790    9928 round_trippers.go:580]     Audit-Id: 30c15e5a-bbff-4af8-9a83-b50a5dd2bf66
	I0122 12:33:08.448790    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:08.448790    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:08.449795    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:08.931904    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:08.931904    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:08.931904    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:08.931904    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:08.936629    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:33:08.936629    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:08.936629    9928 round_trippers.go:580]     Audit-Id: ca585ee3-47a8-4660-9531-84faa4f46e85
	I0122 12:33:08.936629    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:08.936629    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:08.936629    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:08.936629    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:08.937278    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:08 GMT
	I0122 12:33:08.937733    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:33:08.939077    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:08.939137    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:08.939137    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:08.939137    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:08.949182    9928 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0122 12:33:08.949182    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:08.949182    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:08.949182    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:08.949182    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:08 GMT
	I0122 12:33:08.949182    9928 round_trippers.go:580]     Audit-Id: f10a2950-0baf-4b87-b325-1ac8617f7a20
	I0122 12:33:08.949182    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:08.949182    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:08.949182    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:09.435249    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:09.435249    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:09.435249    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:09.435249    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:09.439242    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:09.439242    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:09.439242    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:09.439242    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:09.439853    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:09.439853    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:09 GMT
	I0122 12:33:09.439853    9928 round_trippers.go:580]     Audit-Id: 3618903b-4992-42fc-b820-03ceebeebcd9
	I0122 12:33:09.439853    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:09.440059    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:33:09.440924    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:09.441012    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:09.441012    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:09.441012    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:09.444551    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:09.445013    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:09.445013    9928 round_trippers.go:580]     Audit-Id: 4f323315-a07a-4e05-8ae7-ff8ff8967ccb
	I0122 12:33:09.445013    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:09.445013    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:09.445013    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:09.445013    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:09.445130    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:09 GMT
	I0122 12:33:09.445444    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:09.446038    9928 pod_ready.go:102] pod "coredns-5dd5756b68-g9wk9" in "kube-system" namespace has status "Ready":"False"
	I0122 12:33:09.933255    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:09.933322    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:09.933322    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:09.933322    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:09.937528    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:33:09.937957    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:09.937957    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:09.937957    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:09.937957    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:09 GMT
	I0122 12:33:09.937957    9928 round_trippers.go:580]     Audit-Id: 2753e85b-fc8f-44b9-ac4f-62491dc51aa4
	I0122 12:33:09.937957    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:09.937957    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:09.938371    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:33:09.939182    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:09.939182    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:09.939255    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:09.939255    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:09.942533    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:09.942772    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:09.942772    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:09.942772    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:09.942772    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:09.942772    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:09 GMT
	I0122 12:33:09.942772    9928 round_trippers.go:580]     Audit-Id: 499bfef9-f01b-456e-b62f-d0ab9ccb72bc
	I0122 12:33:09.942772    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:09.943384    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:10.426064    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:10.426064    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:10.426064    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:10.426064    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:10.429651    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:10.430079    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:10.430079    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:10.430079    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:10.430079    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:10.430079    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:10.430192    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:10 GMT
	I0122 12:33:10.430192    9928 round_trippers.go:580]     Audit-Id: 27f4d291-72f5-430d-9115-a84c04f855da
	I0122 12:33:10.430350    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:33:10.430865    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:10.430865    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:10.430865    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:10.430865    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:10.435575    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:33:10.435575    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:10.436110    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:10 GMT
	I0122 12:33:10.436110    9928 round_trippers.go:580]     Audit-Id: 33a1915b-3b87-493c-b077-082fa2493fc9
	I0122 12:33:10.436110    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:10.436110    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:10.436182    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:10.436182    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:10.436215    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:10.932525    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:10.932525    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:10.932525    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:10.932525    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:10.935518    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:33:10.936508    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:10.936508    9928 round_trippers.go:580]     Audit-Id: 339235a4-5abc-430c-8134-a185abc5d40d
	I0122 12:33:10.936576    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:10.936576    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:10.936576    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:10.936576    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:10.936576    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:10 GMT
	I0122 12:33:10.936825    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:33:10.937491    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:10.937491    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:10.937491    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:10.937491    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:10.943517    9928 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0122 12:33:10.943517    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:10.944450    9928 round_trippers.go:580]     Audit-Id: d0c0a248-8410-49cc-b80a-983071f61c89
	I0122 12:33:10.944450    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:10.944450    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:10.944450    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:10.944450    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:10.944450    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:10 GMT
	I0122 12:33:10.944450    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:11.427349    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:11.427349    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:11.427349    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:11.427349    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:11.431903    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:33:11.432426    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:11.432426    9928 round_trippers.go:580]     Audit-Id: 37b4c276-7baa-4f9c-b51a-4d19228316ad
	I0122 12:33:11.432426    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:11.432426    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:11.432426    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:11.432487    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:11.432487    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:11 GMT
	I0122 12:33:11.432524    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:33:11.433690    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:11.433760    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:11.433760    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:11.433760    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:11.436571    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:33:11.436571    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:11.437133    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:11 GMT
	I0122 12:33:11.437133    9928 round_trippers.go:580]     Audit-Id: 963f3c30-a697-4f0a-a788-d89961ddf189
	I0122 12:33:11.437133    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:11.437133    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:11.437133    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:11.437133    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:11.437597    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:11.927691    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:11.927781    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:11.927781    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:11.927781    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:11.931625    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:11.931971    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:11.931971    9928 round_trippers.go:580]     Audit-Id: 7e06334b-bf75-480c-a7e3-700922a3de89
	I0122 12:33:11.931971    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:11.931971    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:11.931971    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:11.931971    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:11.931971    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:11 GMT
	I0122 12:33:11.931971    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:33:11.932822    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:11.932822    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:11.932822    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:11.932822    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:11.935429    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:33:11.935429    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:11.935429    9928 round_trippers.go:580]     Audit-Id: ae77a19f-64f7-4b02-a75b-0dc8d32183f1
	I0122 12:33:11.935429    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:11.935429    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:11.935429    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:11.935429    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:11.936362    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:11 GMT
	I0122 12:33:11.936436    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:11.937037    9928 pod_ready.go:102] pod "coredns-5dd5756b68-g9wk9" in "kube-system" namespace has status "Ready":"False"
	I0122 12:33:12.430569    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:12.430569    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:12.430681    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:12.430681    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:12.439647    9928 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0122 12:33:12.439647    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:12.439647    9928 round_trippers.go:580]     Audit-Id: 006c810b-2858-42bd-adad-ac744bd38c72
	I0122 12:33:12.439647    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:12.439647    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:12.439647    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:12.439647    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:12.439647    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:12 GMT
	I0122 12:33:12.441133    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1783","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6494 chars]
	I0122 12:33:12.442114    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:12.442184    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:12.442184    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:12.442184    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:12.444952    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:33:12.445458    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:12.445458    9928 round_trippers.go:580]     Audit-Id: dabb8815-eb18-4304-9591-dc0d5b201a98
	I0122 12:33:12.445458    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:12.445458    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:12.445458    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:12.445458    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:12.445458    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:12 GMT
	I0122 12:33:12.446599    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:12.446801    9928 pod_ready.go:92] pod "coredns-5dd5756b68-g9wk9" in "kube-system" namespace has status "Ready":"True"
	I0122 12:33:12.446801    9928 pod_ready.go:81] duration metric: took 9.525257s waiting for pod "coredns-5dd5756b68-g9wk9" in "kube-system" namespace to be "Ready" ...
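The lines above show `pod_ready.go` repeatedly GETting the coredns pod until its `Ready` status flips to `"True"`, then recording the elapsed wait (9.5s here). Stripped of the Kubernetes client, the control flow is a plain poll-with-deadline loop. A minimal Python sketch of that pattern, for illustration only (the clock and sleep are injectable so the loop can be exercised without real waiting; none of these names come from the minikube source):

```python
import time

def wait_until(check, timeout_s, interval_s=0.5,
               clock=time.monotonic, sleep=time.sleep):
    """Poll check() until it returns True or timeout_s elapses.

    Returns True if the condition became true within the deadline,
    False on timeout. clock/sleep default to the real time module but
    can be replaced with fakes for testing.
    """
    deadline = clock() + timeout_s
    while True:
        if check():
            return True          # condition satisfied, stop polling
        if clock() >= deadline:
            return False         # deadline passed, give up
        sleep(interval_s)        # back off before the next probe
```

In the log, `check()` corresponds to one GET of the pod plus an inspection of its `Ready` condition, `timeout_s` to the advertised "waiting up to 6m0s", and `interval_s` to the roughly half-second gap visible between successive requests.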
	I0122 12:33:12.446801    9928 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:33:12.446801    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-484200
	I0122 12:33:12.446801    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:12.446801    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:12.446801    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:12.449823    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:12.449823    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:12.449823    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:12.449823    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:12.449823    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:12.449823    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:12.449823    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:12 GMT
	I0122 12:33:12.450826    9928 round_trippers.go:580]     Audit-Id: 1c5720d3-9f85-4b64-8205-ffa2c95d65b9
	I0122 12:33:12.450826    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-484200","namespace":"kube-system","uid":"dc38500e-1aab-483b-bc82-a1071a7b7796","resourceVersion":"1771","creationTimestamp":"2024-01-22T12:32:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.191.211:2379","kubernetes.io/config.hash":"487d659dc59bb7ccbc4aa1b97ecfd4c1","kubernetes.io/config.mirror":"487d659dc59bb7ccbc4aa1b97ecfd4c1","kubernetes.io/config.seen":"2024-01-22T12:32:47.015233808Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:32:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 5873 chars]
	I0122 12:33:12.450826    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:12.450826    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:12.450826    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:12.450826    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:12.454517    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:12.454517    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:12.454517    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:12.454517    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:12.455103    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:12.455103    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:12 GMT
	I0122 12:33:12.455103    9928 round_trippers.go:580]     Audit-Id: 4ea7dfb1-ba57-4f2a-bdfe-7c552e42f401
	I0122 12:33:12.455103    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:12.455407    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:12.455838    9928 pod_ready.go:92] pod "etcd-multinode-484200" in "kube-system" namespace has status "Ready":"True"
	I0122 12:33:12.455914    9928 pod_ready.go:81] duration metric: took 9.1137ms waiting for pod "etcd-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:33:12.455975    9928 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:33:12.456058    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-484200
	I0122 12:33:12.456126    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:12.456126    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:12.456126    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:12.458906    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:33:12.458906    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:12.459089    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:12.459089    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:12.459089    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:12.459089    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:12.459089    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:12 GMT
	I0122 12:33:12.459089    9928 round_trippers.go:580]     Audit-Id: 235b0e25-e033-46bf-bfbc-47d2f9792aa3
	I0122 12:33:12.459089    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-484200","namespace":"kube-system","uid":"1a34c38d-813d-4a41-ab10-ee15c4a85906","resourceVersion":"1757","creationTimestamp":"2024-01-22T12:32:54Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.191.211:8443","kubernetes.io/config.hash":"58aa3387db541c8286ea4c05e28153aa","kubernetes.io/config.mirror":"58aa3387db541c8286ea4c05e28153aa","kubernetes.io/config.seen":"2024-01-22T12:32:47.015239308Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:32:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7409 chars]
	I0122 12:33:12.459790    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:12.459859    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:12.459859    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:12.459859    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:12.463120    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:12.463120    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:12.463120    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:12.463237    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:12.463237    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:12.463237    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:12.463237    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:12 GMT
	I0122 12:33:12.463237    9928 round_trippers.go:580]     Audit-Id: c4f67a00-72a3-4c87-82fe-25988974104d
	I0122 12:33:12.463517    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:12.463517    9928 pod_ready.go:92] pod "kube-apiserver-multinode-484200" in "kube-system" namespace has status "Ready":"True"
	I0122 12:33:12.463517    9928 pod_ready.go:81] duration metric: took 7.5422ms waiting for pod "kube-apiserver-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:33:12.463517    9928 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:33:12.463517    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-484200
	I0122 12:33:12.463517    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:12.463517    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:12.463517    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:12.467071    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:12.468070    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:12.468115    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:12.468115    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:12 GMT
	I0122 12:33:12.468115    9928 round_trippers.go:580]     Audit-Id: b80a472d-0bda-4a9d-bd4b-dfcf8f417a11
	I0122 12:33:12.468115    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:12.468115    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:12.468115    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:12.468257    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-484200","namespace":"kube-system","uid":"2f20b1d1-9ba0-417b-9060-a0e50a1522ad","resourceVersion":"1769","creationTimestamp":"2024-01-22T12:10:47Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"09e3d582188e7c296d7c87aae161a2a5","kubernetes.io/config.mirror":"09e3d582188e7c296d7c87aae161a2a5","kubernetes.io/config.seen":"2024-01-22T12:10:46.856991974Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7179 chars]
	I0122 12:33:12.469013    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:12.469013    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:12.469013    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:12.469013    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:12.471597    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:33:12.471597    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:12.471597    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:12.471597    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:12.471597    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:12.471597    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:12 GMT
	I0122 12:33:12.472121    9928 round_trippers.go:580]     Audit-Id: f50825e3-b788-4fe3-9fc5-b6983dec9710
	I0122 12:33:12.472121    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:12.472440    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:12.472440    9928 pod_ready.go:92] pod "kube-controller-manager-multinode-484200" in "kube-system" namespace has status "Ready":"True"
	I0122 12:33:12.472440    9928 pod_ready.go:81] duration metric: took 8.9234ms waiting for pod "kube-controller-manager-multinode-484200" in "kube-system" namespace to be "Ready" ...
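Each "has status \"Ready\":\"True\"/\"False\"" verdict above is derived from the Pod JSON returned in the preceding response body (the logged bodies are truncated before `status`, so the condition list itself is not visible here). Assuming the standard Kubernetes Pod schema, where readiness lives in `status.conditions` as an entry with `type: "Ready"`, the check can be sketched in Python as follows; this is an illustrative reimplementation, not the actual `pod_ready.go` code:

```python
import json

def pod_is_ready(pod_json: str) -> bool:
    """Return True if the Pod object's status carries a Ready
    condition whose status is the string "True".

    Missing status blocks or condition lists are treated as not ready,
    matching the conservative behavior a readiness poller needs.
    """
    pod = json.loads(pod_json)
    for cond in pod.get("status", {}).get("conditions", []):
        if cond.get("type") == "Ready":
            # Kubernetes encodes condition status as the strings
            # "True" / "False" / "Unknown", not JSON booleans.
            return cond.get("status") == "True"
    return False
```

The transition in the log — `"Ready":"False"` at resourceVersion 1707, then `"Ready":"True"` once the body arrives at resourceVersion 1783 — is exactly this predicate flipping between two polls.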
	I0122 12:33:12.472440    9928 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m94rg" in "kube-system" namespace to be "Ready" ...
	I0122 12:33:12.473037    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m94rg
	I0122 12:33:12.473037    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:12.473134    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:12.473134    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:12.476107    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:33:12.476647    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:12.476647    9928 round_trippers.go:580]     Audit-Id: ab50380e-f3a7-49b9-8b10-69d303221ed9
	I0122 12:33:12.476647    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:12.476647    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:12.476647    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:12.476647    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:12.476647    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:12 GMT
	I0122 12:33:12.477094    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-m94rg","generateName":"kube-proxy-","namespace":"kube-system","uid":"603ddbfd-0357-4e57-a38a-5054e02f81ee","resourceVersion":"1740","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7762395d-4348-422a-b76e-6e2cdf460767","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7762395d-4348-422a-b76e-6e2cdf460767\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5743 chars]
	I0122 12:33:12.477356    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:12.477356    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:12.477356    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:12.477356    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:12.480924    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:12.480924    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:12.481122    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:12.481122    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:12.481122    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:12.481122    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:12 GMT
	I0122 12:33:12.481122    9928 round_trippers.go:580]     Audit-Id: 7fcb4197-6822-4628-a160-369ce857c289
	I0122 12:33:12.481122    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:12.481476    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:12.481592    9928 pod_ready.go:92] pod "kube-proxy-m94rg" in "kube-system" namespace has status "Ready":"True"
	I0122 12:33:12.481592    9928 pod_ready.go:81] duration metric: took 9.1513ms waiting for pod "kube-proxy-m94rg" in "kube-system" namespace to be "Ready" ...
	I0122 12:33:12.481592    9928 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mxd9x" in "kube-system" namespace to be "Ready" ...
	I0122 12:33:12.633924    9928 request.go:629] Waited for 152.3314ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxd9x
	I0122 12:33:12.634212    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxd9x
	I0122 12:33:12.634212    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:12.634212    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:12.634212    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:12.638034    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:12.638034    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:12.638034    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:12.638034    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:12.638034    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:12.638034    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:12.638034    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:12 GMT
	I0122 12:33:12.638034    9928 round_trippers.go:580]     Audit-Id: 37ef7d68-a9ab-4523-b76a-bbb440bf1c88
	I0122 12:33:12.638034    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mxd9x","generateName":"kube-proxy-","namespace":"kube-system","uid":"f90d5b6e-4f0e-466f-8cc8-bc71ebdbad3c","resourceVersion":"1585","creationTimestamp":"2024-01-22T12:18:24Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7762395d-4348-422a-b76e-6e2cdf460767","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:18:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7762395d-4348-422a-b76e-6e2cdf460767\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5751 chars]
	I0122 12:33:12.836071    9928 request.go:629] Waited for 196.8401ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:33:12.836331    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:33:12.836331    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:12.836331    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:12.836331    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:12.839667    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:12.839667    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:12.839667    9928 round_trippers.go:580]     Audit-Id: 24ac447d-c55a-4ef4-9a3b-1cd2910bf171
	I0122 12:33:12.840655    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:12.840655    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:12.840655    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:12.840655    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:12.840655    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:12 GMT
	I0122 12:33:12.841089    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"1d1fc222-9802-4017-8aee-b3310ac58592","resourceVersion":"1605","creationTimestamp":"2024-01-22T12:28:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_28_39_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:28:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3637 chars]
	I0122 12:33:12.841160    9928 pod_ready.go:92] pod "kube-proxy-mxd9x" in "kube-system" namespace has status "Ready":"True"
	I0122 12:33:12.841160    9928 pod_ready.go:81] duration metric: took 359.5666ms waiting for pod "kube-proxy-mxd9x" in "kube-system" namespace to be "Ready" ...
	I0122 12:33:12.841160    9928 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nsjc6" in "kube-system" namespace to be "Ready" ...
	I0122 12:33:13.042134    9928 request.go:629] Waited for 200.8355ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nsjc6
	I0122 12:33:13.042229    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nsjc6
	I0122 12:33:13.042421    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:13.042421    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:13.042489    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:13.045795    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:13.045795    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:13.045795    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:13.045795    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:13.045795    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:13 GMT
	I0122 12:33:13.045795    9928 round_trippers.go:580]     Audit-Id: 891003cf-8ca1-4e39-b452-d46282ed326b
	I0122 12:33:13.045795    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:13.046653    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:13.047105    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nsjc6","generateName":"kube-proxy-","namespace":"kube-system","uid":"6023ce45-661a-489d-aacf-21a85a2c32e4","resourceVersion":"612","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7762395d-4348-422a-b76e-6e2cdf460767","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7762395d-4348-422a-b76e-6e2cdf460767\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0122 12:33:13.244317    9928 request.go:629] Waited for 196.3278ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:33:13.244317    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:33:13.244317    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:13.244683    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:13.244683    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:13.251403    9928 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0122 12:33:13.251403    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:13.251403    9928 round_trippers.go:580]     Audit-Id: 95379aaf-e690-47af-b6c6-9358b36c9a34
	I0122 12:33:13.251403    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:13.251403    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:13.251403    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:13.251403    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:13.251403    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:13 GMT
	I0122 12:33:13.252617    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"1577","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_28_39_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager": [truncated 3819 chars]
	I0122 12:33:13.253234    9928 pod_ready.go:92] pod "kube-proxy-nsjc6" in "kube-system" namespace has status "Ready":"True"
	I0122 12:33:13.253283    9928 pod_ready.go:81] duration metric: took 412.1211ms waiting for pod "kube-proxy-nsjc6" in "kube-system" namespace to be "Ready" ...
	I0122 12:33:13.253349    9928 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:33:13.446200    9928 request.go:629] Waited for 192.7762ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-484200
	I0122 12:33:13.446491    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-484200
	I0122 12:33:13.446491    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:13.446491    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:13.446491    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:13.450046    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:13.450046    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:13.450046    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:13 GMT
	I0122 12:33:13.450905    9928 round_trippers.go:580]     Audit-Id: cbb8428e-a402-4184-9532-db1b6708f26e
	I0122 12:33:13.450905    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:13.450905    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:13.450905    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:13.450905    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:13.451045    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-484200","namespace":"kube-system","uid":"d2f95119-a5d5-4331-880e-bf6add1d87b6","resourceVersion":"1753","creationTimestamp":"2024-01-22T12:10:47Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"59ca1fdff10e7440743e9ed3e4fed55c","kubernetes.io/config.mirror":"59ca1fdff10e7440743e9ed3e4fed55c","kubernetes.io/config.seen":"2024-01-22T12:10:46.856993074Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4909 chars]
	I0122 12:33:13.633888    9928 request.go:629] Waited for 181.5719ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:13.633888    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:13.634227    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:13.634227    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:13.634274    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:13.637897    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:13.638892    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:13.638892    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:13.638892    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:13.638892    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:13.638892    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:13.638892    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:13 GMT
	I0122 12:33:13.638892    9928 round_trippers.go:580]     Audit-Id: 38aa4773-e33d-48d4-9d4e-fc4be0f88674
	I0122 12:33:13.638892    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:13.639570    9928 pod_ready.go:92] pod "kube-scheduler-multinode-484200" in "kube-system" namespace has status "Ready":"True"
	I0122 12:33:13.639570    9928 pod_ready.go:81] duration metric: took 386.2196ms waiting for pod "kube-scheduler-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:33:13.639570    9928 pod_ready.go:38] duration metric: took 10.7310092s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0122 12:33:13.639570    9928 api_server.go:52] waiting for apiserver process to appear ...
	I0122 12:33:13.653065    9928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 12:33:13.676124    9928 command_runner.go:130] > 1830
	I0122 12:33:13.676219    9928 api_server.go:72] duration metric: took 14.486944s to wait for apiserver process to appear ...
	I0122 12:33:13.676219    9928 api_server.go:88] waiting for apiserver healthz status ...
	I0122 12:33:13.676297    9928 api_server.go:253] Checking apiserver healthz at https://172.23.191.211:8443/healthz ...
	I0122 12:33:13.685278    9928 api_server.go:279] https://172.23.191.211:8443/healthz returned 200:
	ok
	I0122 12:33:13.686406    9928 round_trippers.go:463] GET https://172.23.191.211:8443/version
	I0122 12:33:13.686406    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:13.686406    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:13.686406    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:13.688768    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:33:13.688768    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:13.688768    9928 round_trippers.go:580]     Content-Length: 264
	I0122 12:33:13.688768    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:13 GMT
	I0122 12:33:13.688768    9928 round_trippers.go:580]     Audit-Id: ea666616-b12e-45de-8dbf-ad7b6fb05f9b
	I0122 12:33:13.688969    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:13.688969    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:13.688969    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:13.688969    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:13.688969    9928 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0122 12:33:13.688969    9928 api_server.go:141] control plane version: v1.28.4
	I0122 12:33:13.689097    9928 api_server.go:131] duration metric: took 12.8772ms to wait for apiserver health ...
	I0122 12:33:13.689097    9928 system_pods.go:43] waiting for kube-system pods to appear ...
	I0122 12:33:13.836456    9928 request.go:629] Waited for 147.2976ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods
	I0122 12:33:13.836836    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods
	I0122 12:33:13.836836    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:13.836886    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:13.836886    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:13.842526    9928 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:33:13.842526    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:13.842526    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:13.842526    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:13 GMT
	I0122 12:33:13.842526    9928 round_trippers.go:580]     Audit-Id: a3bf42f7-ceb6-4609-9809-9471de7fe797
	I0122 12:33:13.842526    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:13.842526    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:13.842526    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:13.846740    9928 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1790"},"items":[{"metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1783","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82563 chars]
	I0122 12:33:13.850705    9928 system_pods.go:59] 12 kube-system pods found
	I0122 12:33:13.850705    9928 system_pods.go:61] "coredns-5dd5756b68-g9wk9" [d5b665f2-ead9-43b8-8bf6-73c2edf81f8f] Running
	I0122 12:33:13.850705    9928 system_pods.go:61] "etcd-multinode-484200" [dc38500e-1aab-483b-bc82-a1071a7b7796] Running
	I0122 12:33:13.850705    9928 system_pods.go:61] "kindnet-f9bw4" [8ee26eb1-daae-41a6-8d1c-e7f3e821ebb5] Running
	I0122 12:33:13.850705    9928 system_pods.go:61] "kindnet-xqpt7" [302b6561-a7e7-40f7-8b2d-9cbe86c27476] Running
	I0122 12:33:13.850705    9928 system_pods.go:61] "kindnet-zs4rk" [a84b5ca3-20c8-4246-911d-8ca0f0f5bf88] Running
	I0122 12:33:13.850705    9928 system_pods.go:61] "kube-apiserver-multinode-484200" [1a34c38d-813d-4a41-ab10-ee15c4a85906] Running
	I0122 12:33:13.850705    9928 system_pods.go:61] "kube-controller-manager-multinode-484200" [2f20b1d1-9ba0-417b-9060-a0e50a1522ad] Running
	I0122 12:33:13.850705    9928 system_pods.go:61] "kube-proxy-m94rg" [603ddbfd-0357-4e57-a38a-5054e02f81ee] Running
	I0122 12:33:13.850705    9928 system_pods.go:61] "kube-proxy-mxd9x" [f90d5b6e-4f0e-466f-8cc8-bc71ebdbad3c] Running
	I0122 12:33:13.850705    9928 system_pods.go:61] "kube-proxy-nsjc6" [6023ce45-661a-489d-aacf-21a85a2c32e4] Running
	I0122 12:33:13.850705    9928 system_pods.go:61] "kube-scheduler-multinode-484200" [d2f95119-a5d5-4331-880e-bf6add1d87b6] Running
	I0122 12:33:13.850705    9928 system_pods.go:61] "storage-provisioner" [30187bc8-2b16-4ca8-944b-c71bfb25e36b] Running
	I0122 12:33:13.850705    9928 system_pods.go:74] duration metric: took 161.5464ms to wait for pod list to return data ...
	I0122 12:33:13.850705    9928 default_sa.go:34] waiting for default service account to be created ...
	I0122 12:33:14.039770    9928 request.go:629] Waited for 189.0646ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/namespaces/default/serviceaccounts
	I0122 12:33:14.039770    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/default/serviceaccounts
	I0122 12:33:14.039770    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:14.039770    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:14.039770    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:14.047971    9928 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0122 12:33:14.047971    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:14.047971    9928 round_trippers.go:580]     Audit-Id: 99c36e4d-1f0d-465e-a254-a2c6d9256814
	I0122 12:33:14.047971    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:14.047971    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:14.047971    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:14.047971    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:14.047971    9928 round_trippers.go:580]     Content-Length: 262
	I0122 12:33:14.047971    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:14 GMT
	I0122 12:33:14.047971    9928 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1792"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"4124c850-901f-4684-9b66-62ca4c3085b4","resourceVersion":"367","creationTimestamp":"2024-01-22T12:10:59Z"}}]}
	I0122 12:33:14.048961    9928 default_sa.go:45] found service account: "default"
	I0122 12:33:14.048961    9928 default_sa.go:55] duration metric: took 198.2559ms for default service account to be created ...
	I0122 12:33:14.048961    9928 system_pods.go:116] waiting for k8s-apps to be running ...
	I0122 12:33:14.246032    9928 request.go:629] Waited for 196.8113ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods
	I0122 12:33:14.246313    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods
	I0122 12:33:14.246356    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:14.246356    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:14.246356    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:14.252841    9928 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:33:14.252916    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:14.252977    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:14.252977    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:14.252977    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:14.252977    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:14.252977    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:14 GMT
	I0122 12:33:14.252977    9928 round_trippers.go:580]     Audit-Id: 76c96352-d504-429d-9af7-5865c4af3fc7
	I0122 12:33:14.254559    9928 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1792"},"items":[{"metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1783","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82563 chars]
	I0122 12:33:14.258590    9928 system_pods.go:86] 12 kube-system pods found
	I0122 12:33:14.258590    9928 system_pods.go:89] "coredns-5dd5756b68-g9wk9" [d5b665f2-ead9-43b8-8bf6-73c2edf81f8f] Running
	I0122 12:33:14.258590    9928 system_pods.go:89] "etcd-multinode-484200" [dc38500e-1aab-483b-bc82-a1071a7b7796] Running
	I0122 12:33:14.258590    9928 system_pods.go:89] "kindnet-f9bw4" [8ee26eb1-daae-41a6-8d1c-e7f3e821ebb5] Running
	I0122 12:33:14.258590    9928 system_pods.go:89] "kindnet-xqpt7" [302b6561-a7e7-40f7-8b2d-9cbe86c27476] Running
	I0122 12:33:14.258590    9928 system_pods.go:89] "kindnet-zs4rk" [a84b5ca3-20c8-4246-911d-8ca0f0f5bf88] Running
	I0122 12:33:14.258590    9928 system_pods.go:89] "kube-apiserver-multinode-484200" [1a34c38d-813d-4a41-ab10-ee15c4a85906] Running
	I0122 12:33:14.258590    9928 system_pods.go:89] "kube-controller-manager-multinode-484200" [2f20b1d1-9ba0-417b-9060-a0e50a1522ad] Running
	I0122 12:33:14.258590    9928 system_pods.go:89] "kube-proxy-m94rg" [603ddbfd-0357-4e57-a38a-5054e02f81ee] Running
	I0122 12:33:14.258590    9928 system_pods.go:89] "kube-proxy-mxd9x" [f90d5b6e-4f0e-466f-8cc8-bc71ebdbad3c] Running
	I0122 12:33:14.258590    9928 system_pods.go:89] "kube-proxy-nsjc6" [6023ce45-661a-489d-aacf-21a85a2c32e4] Running
	I0122 12:33:14.258590    9928 system_pods.go:89] "kube-scheduler-multinode-484200" [d2f95119-a5d5-4331-880e-bf6add1d87b6] Running
	I0122 12:33:14.258590    9928 system_pods.go:89] "storage-provisioner" [30187bc8-2b16-4ca8-944b-c71bfb25e36b] Running
	I0122 12:33:14.258590    9928 system_pods.go:126] duration metric: took 209.6271ms to wait for k8s-apps to be running ...
	I0122 12:33:14.258590    9928 system_svc.go:44] waiting for kubelet service to be running ....
	I0122 12:33:14.272790    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 12:33:14.292650    9928 system_svc.go:56] duration metric: took 34.0116ms WaitForService to wait for kubelet.
	I0122 12:33:14.292650    9928 kubeadm.go:581] duration metric: took 15.1033714s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0122 12:33:14.292650    9928 node_conditions.go:102] verifying NodePressure condition ...
	I0122 12:33:14.433395    9928 request.go:629] Waited for 140.5281ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/nodes
	I0122 12:33:14.433454    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes
	I0122 12:33:14.433454    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:14.433454    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:14.433454    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:14.438129    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:33:14.438816    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:14.438816    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:14.438816    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:14.438816    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:14.438816    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:14.438920    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:14 GMT
	I0122 12:33:14.438920    9928 round_trippers.go:580]     Audit-Id: e950cdd5-6785-4a41-b04b-4dcb13235fd8
	I0122 12:33:14.438992    9928 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1792"},"items":[{"metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 14732 chars]
	I0122 12:33:14.440222    9928 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0122 12:33:14.440222    9928 node_conditions.go:123] node cpu capacity is 2
	I0122 12:33:14.440295    9928 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0122 12:33:14.440295    9928 node_conditions.go:123] node cpu capacity is 2
	I0122 12:33:14.440295    9928 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0122 12:33:14.440295    9928 node_conditions.go:123] node cpu capacity is 2
	I0122 12:33:14.440295    9928 node_conditions.go:105] duration metric: took 147.6451ms to run NodePressure ...
	I0122 12:33:14.440295    9928 start.go:228] waiting for startup goroutines ...
	I0122 12:33:14.440373    9928 start.go:233] waiting for cluster config update ...
	I0122 12:33:14.440373    9928 start.go:242] writing updated cluster config ...
	I0122 12:33:14.452638    9928 config.go:182] Loaded profile config "multinode-484200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 12:33:14.453603    9928 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\config.json ...
	I0122 12:33:14.458254    9928 out.go:177] * Starting worker node multinode-484200-m02 in cluster multinode-484200
	I0122 12:33:14.458254    9928 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0122 12:33:14.458254    9928 cache.go:56] Caching tarball of preloaded images
	I0122 12:33:14.458254    9928 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0122 12:33:14.459746    9928 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0122 12:33:14.459954    9928 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\config.json ...
	I0122 12:33:14.462164    9928 start.go:365] acquiring machines lock for multinode-484200-m02: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0122 12:33:14.462536    9928 start.go:369] acquired machines lock for "multinode-484200-m02" in 372.4µs
	I0122 12:33:14.462747    9928 start.go:96] Skipping create...Using existing machine configuration
	I0122 12:33:14.462747    9928 fix.go:54] fixHost starting: m02
	I0122 12:33:14.463011    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:33:16.582684    9928 main.go:141] libmachine: [stdout =====>] : Off
	
	I0122 12:33:16.582755    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:33:16.582755    9928 fix.go:102] recreateIfNeeded on multinode-484200-m02: state=Stopped err=<nil>
	W0122 12:33:16.582755    9928 fix.go:128] unexpected machine state, will restart: <nil>
	I0122 12:33:16.584123    9928 out.go:177] * Restarting existing hyperv VM for "multinode-484200-m02" ...
	I0122 12:33:16.585162    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-484200-m02
	I0122 12:33:19.488456    9928 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:33:19.488592    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:33:19.488627    9928 main.go:141] libmachine: Waiting for host to start...
	I0122 12:33:19.488665    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:33:21.740472    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:33:21.740472    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:33:21.740558    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:33:24.286481    9928 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:33:24.286729    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:33:25.289539    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:33:27.548812    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:33:27.548812    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:33:27.548812    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:33:30.109002    9928 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:33:30.109002    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:33:31.111517    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:33:33.352583    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:33:33.352583    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:33:33.352583    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:33:35.915759    9928 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:33:35.915798    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:33:36.917392    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:33:39.150660    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:33:39.150660    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:33:39.150660    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:33:41.756864    9928 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:33:41.756864    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:33:42.761637    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:33:45.038046    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:33:45.038046    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:33:45.038172    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:33:47.600755    9928 main.go:141] libmachine: [stdout =====>] : 172.23.187.210
	
	I0122 12:33:47.601070    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:33:47.604070    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:33:49.769118    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:33:49.769357    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:33:49.769357    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:33:52.314596    9928 main.go:141] libmachine: [stdout =====>] : 172.23.187.210
	
	I0122 12:33:52.314596    9928 main.go:141] libmachine: [stderr =====>] : 
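The repeated `( Hyper-V\Get-VM ... ).state` / `ipaddresses[0]` queries above are minikube's host-start wait loop: after `Start-VM`, it polls the first network adapter until a DHCP lease appears, sleeping about a second between empty results. A minimal sketch of the same poll-until-non-empty pattern in plain shell (the `get_vm_ip` function is a hypothetical stand-in for the PowerShell query, and it simulates the lease arriving on the fourth poll; the real loop also sleeps between attempts):

```shell
#!/bin/sh
# Poll a probe until it returns non-empty output, mirroring the
# "Waiting for host to start..." loop in the log above.
# get_vm_ip stands in for:
#   powershell.exe -NoProfile -NonInteractive \
#     "(( Hyper-V\Get-VM <name> ).networkadapters[0]).ipaddresses[0]"
get_vm_ip() {
    # Simulated probe: no address for the first three polls,
    # then the adapter reports its lease.
    if [ "$1" -gt 3 ]; then
        echo "172.23.187.210"
    fi
}

ip=""
tries=0
while [ -z "$ip" ] && [ "$tries" -lt 30 ]; do
    tries=$((tries + 1))
    ip=$(get_vm_ip "$tries")
    # The real loop sleeps ~1s here between empty results.
done
echo "VM IP: ${ip:-<timed out>}"
```

The bounded retry count is what keeps a VM that never gets an address from hanging the run forever; minikube's version enforces this with a wall-clock timeout instead of a fixed iteration cap.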
	I0122 12:33:52.314823    9928 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\config.json ...
	I0122 12:33:52.317448    9928 machine.go:88] provisioning docker machine ...
	I0122 12:33:52.317448    9928 buildroot.go:166] provisioning hostname "multinode-484200-m02"
	I0122 12:33:52.317448    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:33:54.479151    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:33:54.479218    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:33:54.479218    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:33:57.057445    9928 main.go:141] libmachine: [stdout =====>] : 172.23.187.210
	
	I0122 12:33:57.057610    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:33:57.062750    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:33:57.064168    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.187.210 22 <nil> <nil>}
	I0122 12:33:57.064168    9928 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-484200-m02 && echo "multinode-484200-m02" | sudo tee /etc/hostname
	I0122 12:33:57.228802    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-484200-m02
	
	I0122 12:33:57.228802    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:33:59.393183    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:33:59.393279    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:33:59.393279    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:34:01.968855    9928 main.go:141] libmachine: [stdout =====>] : 172.23.187.210
	
	I0122 12:34:01.968855    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:01.977092    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:34:01.978356    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.187.210 22 <nil> <nil>}
	I0122 12:34:01.978421    9928 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-484200-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-484200-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-484200-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0122 12:34:02.131227    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 12:34:02.131339    9928 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0122 12:34:02.131339    9928 buildroot.go:174] setting up certificates
	I0122 12:34:02.131339    9928 provision.go:83] configureAuth start
	I0122 12:34:02.131339    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:34:04.270182    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:34:04.270182    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:04.270182    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:34:06.829698    9928 main.go:141] libmachine: [stdout =====>] : 172.23.187.210
	
	I0122 12:34:06.829985    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:06.830178    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:34:08.996613    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:34:08.996676    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:08.996820    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:34:11.634587    9928 main.go:141] libmachine: [stdout =====>] : 172.23.187.210
	
	I0122 12:34:11.634815    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:11.634815    9928 provision.go:138] copyHostCerts
	I0122 12:34:11.635113    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0122 12:34:11.635436    9928 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0122 12:34:11.635534    9928 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0122 12:34:11.636100    9928 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0122 12:34:11.637227    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0122 12:34:11.637431    9928 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0122 12:34:11.637431    9928 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0122 12:34:11.637431    9928 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0122 12:34:11.639242    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0122 12:34:11.639543    9928 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0122 12:34:11.639653    9928 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0122 12:34:11.640201    9928 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0122 12:34:11.641276    9928 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-484200-m02 san=[172.23.187.210 172.23.187.210 localhost 127.0.0.1 minikube multinode-484200-m02]
	I0122 12:34:11.804762    9928 provision.go:172] copyRemoteCerts
	I0122 12:34:11.816358    9928 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0122 12:34:11.816358    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:34:13.946659    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:34:13.946659    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:13.946659    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:34:16.535379    9928 main.go:141] libmachine: [stdout =====>] : 172.23.187.210
	
	I0122 12:34:16.535574    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:16.536130    9928 sshutil.go:53] new ssh client: &{IP:172.23.187.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200-m02\id_rsa Username:docker}
	I0122 12:34:16.644492    9928 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8281139s)
	I0122 12:34:16.644492    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0122 12:34:16.645113    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0122 12:34:16.690124    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0122 12:34:16.690451    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I0122 12:34:16.727351    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0122 12:34:16.727769    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0122 12:34:16.770488    9928 provision.go:86] duration metric: configureAuth took 14.6390856s
	I0122 12:34:16.770488    9928 buildroot.go:189] setting minikube options for container-runtime
	I0122 12:34:16.771255    9928 config.go:182] Loaded profile config "multinode-484200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 12:34:16.771324    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:34:18.895138    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:34:18.895321    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:18.895415    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:34:21.450253    9928 main.go:141] libmachine: [stdout =====>] : 172.23.187.210
	
	I0122 12:34:21.450459    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:21.456881    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:34:21.457571    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.187.210 22 <nil> <nil>}
	I0122 12:34:21.457571    9928 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0122 12:34:21.600469    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0122 12:34:21.600551    9928 buildroot.go:70] root file system type: tmpfs
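The `df --output=fstype / | tail -n 1` probe above is how the provisioner learns what backs the root filesystem (here `tmpfs`, as expected for the Buildroot live image) before it writes the systemd unit. The same one-liner can be run anywhere GNU coreutils `df` is available; note that `--output` is a GNU extension and is absent from BSD/busybox `df`:

```shell
# Print only the filesystem type backing / (GNU coreutils df).
# On the buildroot guest in this log it prints "tmpfs"; on an
# installed Linux host it would typically print ext4, xfs, btrfs, etc.
fstype=$(df --output=fstype / | tail -n 1)
echo "root fs: $fstype"
```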
	I0122 12:34:21.600752    9928 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0122 12:34:21.600792    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:34:23.776909    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:34:23.776909    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:23.777003    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:34:26.380446    9928 main.go:141] libmachine: [stdout =====>] : 172.23.187.210
	
	I0122 12:34:26.380657    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:26.386883    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:34:26.387511    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.187.210 22 <nil> <nil>}
	I0122 12:34:26.387511    9928 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.23.191.211"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0122 12:34:26.551691    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.23.191.211
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0122 12:34:26.552275    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:34:28.706607    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:34:28.706794    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:28.706794    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:34:31.309577    9928 main.go:141] libmachine: [stdout =====>] : 172.23.187.210
	
	I0122 12:34:31.309577    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:31.314975    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:34:31.315794    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.187.210 22 <nil> <nil>}
	I0122 12:34:31.315873    9928 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0122 12:34:32.486837    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
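The `sudo diff -u ... || { sudo mv ...; systemctl daemon-reload && ... restart docker; }` command above is an idempotent update: the freshly rendered unit is only swapped in, and docker only restarted, when it differs from what is installed. In this run `diff` failed because no `/lib/systemd/system/docker.service` existed yet, so the "install and restart" branch ran and `systemctl enable` created the symlink shown. A standalone sketch of the pattern using throwaway files (paths are illustrative; sudo and systemctl are replaced by a counter):

```shell
#!/bin/sh
# Idempotent "replace only if changed" update, as used for docker.service.
dir=$(mktemp -d)
new="$dir/docker.service.new"
cur="$dir/docker.service"
printf '%s\n' "[Unit]" "Description=demo" > "$new"

reloads=0
apply_if_changed() {
    # diff exits non-zero when the files differ or $cur is missing,
    # which triggers the install branch -- the same shape as the log.
    diff -u "$cur" "$new" 2>/dev/null || {
        mv "$new" "$cur"
        reloads=$((reloads + 1))   # stands in for daemon-reload + restart
    }
}

apply_if_changed            # first run: $cur missing, file installed
cp "$cur" "$new"            # regenerate an identical candidate unit
apply_if_changed            # second run: no diff, so no restart
echo "reloads: $reloads"
```

Running the update twice with identical input restarts nothing the second time, which is exactly why minikube can re-provision an existing node without bouncing docker on every start.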
	I0122 12:34:32.486837    9928 machine.go:91] provisioned docker machine in 40.1692144s
	I0122 12:34:32.486837    9928 start.go:300] post-start starting for "multinode-484200-m02" (driver="hyperv")
	I0122 12:34:32.486837    9928 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0122 12:34:32.499278    9928 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0122 12:34:32.499278    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:34:34.631492    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:34:34.631492    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:34.631596    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:34:37.222269    9928 main.go:141] libmachine: [stdout =====>] : 172.23.187.210
	
	I0122 12:34:37.222269    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:37.222878    9928 sshutil.go:53] new ssh client: &{IP:172.23.187.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200-m02\id_rsa Username:docker}
	I0122 12:34:37.338396    9928 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8390972s)
	I0122 12:34:37.354863    9928 ssh_runner.go:195] Run: cat /etc/os-release
	I0122 12:34:37.361293    9928 command_runner.go:130] > NAME=Buildroot
	I0122 12:34:37.361293    9928 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0122 12:34:37.361293    9928 command_runner.go:130] > ID=buildroot
	I0122 12:34:37.361293    9928 command_runner.go:130] > VERSION_ID=2021.02.12
	I0122 12:34:37.361293    9928 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0122 12:34:37.361293    9928 info.go:137] Remote host: Buildroot 2021.02.12
	I0122 12:34:37.361514    9928 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0122 12:34:37.361949    9928 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0122 12:34:37.363542    9928 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> 63962.pem in /etc/ssl/certs
	I0122 12:34:37.363542    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> /etc/ssl/certs/63962.pem
	I0122 12:34:37.376811    9928 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0122 12:34:37.391587    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem --> /etc/ssl/certs/63962.pem (1708 bytes)
	I0122 12:34:37.433391    9928 start.go:303] post-start completed in 4.9465316s
	I0122 12:34:37.433468    9928 fix.go:56] fixHost completed within 1m22.9703581s
	I0122 12:34:37.433468    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:34:39.563914    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:34:39.563914    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:39.563914    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:34:42.135186    9928 main.go:141] libmachine: [stdout =====>] : 172.23.187.210
	
	I0122 12:34:42.135186    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:42.143984    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:34:42.145284    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.187.210 22 <nil> <nil>}
	I0122 12:34:42.145284    9928 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0122 12:34:42.281742    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705926882.281948644
	
	I0122 12:34:42.281742    9928 fix.go:206] guest clock: 1705926882.281948644
	I0122 12:34:42.281742    9928 fix.go:219] Guest: 2024-01-22 12:34:42.281948644 +0000 UTC Remote: 2024-01-22 12:34:37.4334685 +0000 UTC m=+227.245503601 (delta=4.848480144s)
	I0122 12:34:42.281895    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:34:44.438459    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:34:44.438459    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:44.438459    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:34:46.980361    9928 main.go:141] libmachine: [stdout =====>] : 172.23.187.210
	
	I0122 12:34:46.980361    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:46.986677    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:34:46.987357    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.187.210 22 <nil> <nil>}
	I0122 12:34:46.987357    9928 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1705926882
	I0122 12:34:47.135337    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan 22 12:34:42 UTC 2024
	
	I0122 12:34:47.135337    9928 fix.go:226] clock set: Mon Jan 22 12:34:42 UTC 2024
	 (err=<nil>)
	I0122 12:34:47.135337    9928 start.go:83] releasing machines lock for "multinode-484200-m02", held for 1m32.6723949s
	I0122 12:34:47.135929    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:34:49.288328    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:34:49.288629    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:49.288850    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:34:51.868114    9928 main.go:141] libmachine: [stdout =====>] : 172.23.187.210
	
	I0122 12:34:51.868114    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:51.868557    9928 out.go:177] * Found network options:
	I0122 12:34:51.869517    9928 out.go:177]   - NO_PROXY=172.23.191.211
	W0122 12:34:51.870847    9928 proxy.go:119] fail to check proxy env: Error ip not in block
	I0122 12:34:51.871593    9928 out.go:177]   - NO_PROXY=172.23.191.211
	W0122 12:34:51.872394    9928 proxy.go:119] fail to check proxy env: Error ip not in block
	W0122 12:34:51.874107    9928 proxy.go:119] fail to check proxy env: Error ip not in block
	I0122 12:34:51.875589    9928 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0122 12:34:51.875589    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:34:51.891552    9928 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0122 12:34:51.891552    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:34:54.094223    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:34:54.094423    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:54.094507    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:34:54.126045    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:34:54.126147    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:54.126147    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:34:56.773482    9928 main.go:141] libmachine: [stdout =====>] : 172.23.187.210
	
	I0122 12:34:56.773482    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:56.774171    9928 sshutil.go:53] new ssh client: &{IP:172.23.187.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200-m02\id_rsa Username:docker}
	I0122 12:34:56.793245    9928 main.go:141] libmachine: [stdout =====>] : 172.23.187.210
	
	I0122 12:34:56.793245    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:56.793799    9928 sshutil.go:53] new ssh client: &{IP:172.23.187.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200-m02\id_rsa Username:docker}
	I0122 12:34:56.874449    9928 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0122 12:34:56.875563    9928 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.983817s)
	W0122 12:34:56.875563    9928 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0122 12:34:56.889428    9928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0122 12:34:56.987405    9928 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0122 12:34:56.988394    9928 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0122 12:34:56.988439    9928 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1127822s)
	I0122 12:34:56.988439    9928 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0122 12:34:56.988534    9928 start.go:475] detecting cgroup driver to use...
	I0122 12:34:56.988778    9928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 12:34:57.025463    9928 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0122 12:34:57.040624    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0122 12:34:57.076176    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0122 12:34:57.095878    9928 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0122 12:34:57.115082    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0122 12:34:57.146139    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 12:34:57.178095    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0122 12:34:57.209767    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 12:34:57.242105    9928 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0122 12:34:57.272128    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0122 12:34:57.302134    9928 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0122 12:34:57.321090    9928 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0122 12:34:57.333617    9928 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0122 12:34:57.362699    9928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 12:34:57.537738    9928 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0122 12:34:57.568520    9928 start.go:475] detecting cgroup driver to use...
	I0122 12:34:57.582669    9928 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0122 12:34:57.604090    9928 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0122 12:34:57.604181    9928 command_runner.go:130] > [Unit]
	I0122 12:34:57.604181    9928 command_runner.go:130] > Description=Docker Application Container Engine
	I0122 12:34:57.604181    9928 command_runner.go:130] > Documentation=https://docs.docker.com
	I0122 12:34:57.604181    9928 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0122 12:34:57.604181    9928 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0122 12:34:57.604251    9928 command_runner.go:130] > StartLimitBurst=3
	I0122 12:34:57.604251    9928 command_runner.go:130] > StartLimitIntervalSec=60
	I0122 12:34:57.604251    9928 command_runner.go:130] > [Service]
	I0122 12:34:57.604293    9928 command_runner.go:130] > Type=notify
	I0122 12:34:57.604293    9928 command_runner.go:130] > Restart=on-failure
	I0122 12:34:57.604293    9928 command_runner.go:130] > Environment=NO_PROXY=172.23.191.211
	I0122 12:34:57.604293    9928 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0122 12:34:57.604293    9928 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0122 12:34:57.604293    9928 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0122 12:34:57.604397    9928 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0122 12:34:57.604397    9928 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0122 12:34:57.604397    9928 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0122 12:34:57.604397    9928 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0122 12:34:57.604397    9928 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0122 12:34:57.604510    9928 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0122 12:34:57.604540    9928 command_runner.go:130] > ExecStart=
	I0122 12:34:57.604540    9928 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0122 12:34:57.604578    9928 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0122 12:34:57.604578    9928 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0122 12:34:57.604578    9928 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0122 12:34:57.604651    9928 command_runner.go:130] > LimitNOFILE=infinity
	I0122 12:34:57.604651    9928 command_runner.go:130] > LimitNPROC=infinity
	I0122 12:34:57.604651    9928 command_runner.go:130] > LimitCORE=infinity
	I0122 12:34:57.604651    9928 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0122 12:34:57.604651    9928 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0122 12:34:57.604651    9928 command_runner.go:130] > TasksMax=infinity
	I0122 12:34:57.604651    9928 command_runner.go:130] > TimeoutStartSec=0
	I0122 12:34:57.604651    9928 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0122 12:34:57.604651    9928 command_runner.go:130] > Delegate=yes
	I0122 12:34:57.604651    9928 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0122 12:34:57.604651    9928 command_runner.go:130] > KillMode=process
	I0122 12:34:57.604651    9928 command_runner.go:130] > [Install]
	I0122 12:34:57.604651    9928 command_runner.go:130] > WantedBy=multi-user.target
	I0122 12:34:57.617972    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 12:34:57.650383    9928 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0122 12:34:57.690602    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 12:34:57.725651    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0122 12:34:57.759626    9928 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0122 12:34:57.808632    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0122 12:34:57.831025    9928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 12:34:57.858981    9928 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0122 12:34:57.875689    9928 ssh_runner.go:195] Run: which cri-dockerd
	I0122 12:34:57.880413    9928 command_runner.go:130] > /usr/bin/cri-dockerd
	I0122 12:34:57.899616    9928 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0122 12:34:57.917486    9928 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0122 12:34:57.956132    9928 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0122 12:34:58.131694    9928 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0122 12:34:58.279314    9928 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0122 12:34:58.279428    9928 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0122 12:34:58.322438    9928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 12:34:58.488165    9928 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0122 12:35:00.061553    9928 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.573381s)
	I0122 12:35:00.075733    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0122 12:35:00.112763    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0122 12:35:00.148178    9928 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0122 12:35:00.327314    9928 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0122 12:35:00.500201    9928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 12:35:00.666573    9928 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0122 12:35:00.706470    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0122 12:35:00.740345    9928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 12:35:00.915381    9928 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0122 12:35:01.027974    9928 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0122 12:35:01.039978    9928 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0122 12:35:01.048970    9928 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0122 12:35:01.048970    9928 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0122 12:35:01.048970    9928 command_runner.go:130] > Device: 16h/22d	Inode: 844         Links: 1
	I0122 12:35:01.048970    9928 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0122 12:35:01.048970    9928 command_runner.go:130] > Access: 2024-01-22 12:35:00.941717676 +0000
	I0122 12:35:01.048970    9928 command_runner.go:130] > Modify: 2024-01-22 12:35:00.941717676 +0000
	I0122 12:35:01.048970    9928 command_runner.go:130] > Change: 2024-01-22 12:35:00.945717676 +0000
	I0122 12:35:01.048970    9928 command_runner.go:130] >  Birth: -
	I0122 12:35:01.048970    9928 start.go:543] Will wait 60s for crictl version
	I0122 12:35:01.059970    9928 ssh_runner.go:195] Run: which crictl
	I0122 12:35:01.066202    9928 command_runner.go:130] > /usr/bin/crictl
	I0122 12:35:01.083934    9928 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0122 12:35:01.158267    9928 command_runner.go:130] > Version:  0.1.0
	I0122 12:35:01.158267    9928 command_runner.go:130] > RuntimeName:  docker
	I0122 12:35:01.158267    9928 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0122 12:35:01.158267    9928 command_runner.go:130] > RuntimeApiVersion:  v1
	I0122 12:35:01.158267    9928 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0122 12:35:01.167556    9928 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0122 12:35:01.201344    9928 command_runner.go:130] > 24.0.7
	I0122 12:35:01.210706    9928 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0122 12:35:01.247316    9928 command_runner.go:130] > 24.0.7
	I0122 12:35:01.249332    9928 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0122 12:35:01.249332    9928 out.go:177]   - env NO_PROXY=172.23.191.211
	I0122 12:35:01.250310    9928 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0122 12:35:01.254329    9928 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0122 12:35:01.254329    9928 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0122 12:35:01.254329    9928 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0122 12:35:01.254329    9928 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:49:f0:b7 Flags:up|broadcast|multicast|running}
	I0122 12:35:01.257332    9928 ip.go:210] interface addr: fe80::2ecf:c548:6b1f:89a8/64
	I0122 12:35:01.257332    9928 ip.go:210] interface addr: 172.23.176.1/20
	I0122 12:35:01.270352    9928 ssh_runner.go:195] Run: grep 172.23.176.1	host.minikube.internal$ /etc/hosts
	I0122 12:35:01.275635    9928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.176.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 12:35:01.294770    9928 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200 for IP: 172.23.187.210
	I0122 12:35:01.294862    9928 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 12:35:01.295148    9928 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0122 12:35:01.295716    9928 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0122 12:35:01.295882    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0122 12:35:01.296365    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0122 12:35:01.296480    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0122 12:35:01.296787    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0122 12:35:01.297365    9928 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\6396.pem (1338 bytes)
	W0122 12:35:01.297786    9928 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\6396_empty.pem, impossibly tiny 0 bytes
	I0122 12:35:01.297919    9928 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0122 12:35:01.298172    9928 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0122 12:35:01.298551    9928 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0122 12:35:01.298906    9928 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0122 12:35:01.299438    9928 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem (1708 bytes)
	I0122 12:35:01.299748    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> /usr/share/ca-certificates/63962.pem
	I0122 12:35:01.299859    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:35:01.300177    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\6396.pem -> /usr/share/ca-certificates/6396.pem
	I0122 12:35:01.300936    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0122 12:35:01.339908    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0122 12:35:01.380054    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0122 12:35:01.415556    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0122 12:35:01.452418    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem --> /usr/share/ca-certificates/63962.pem (1708 bytes)
	I0122 12:35:01.493889    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0122 12:35:01.534081    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\6396.pem --> /usr/share/ca-certificates/6396.pem (1338 bytes)
	I0122 12:35:01.588788    9928 ssh_runner.go:195] Run: openssl version
	I0122 12:35:01.595896    9928 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0122 12:35:01.610061    9928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/63962.pem && ln -fs /usr/share/ca-certificates/63962.pem /etc/ssl/certs/63962.pem"
	I0122 12:35:01.640344    9928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/63962.pem
	I0122 12:35:01.647349    9928 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 22 10:53 /usr/share/ca-certificates/63962.pem
	I0122 12:35:01.647349    9928 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 22 10:53 /usr/share/ca-certificates/63962.pem
	I0122 12:35:01.659628    9928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/63962.pem
	I0122 12:35:01.666680    9928 command_runner.go:130] > 3ec20f2e
	I0122 12:35:01.677817    9928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/63962.pem /etc/ssl/certs/3ec20f2e.0"
	I0122 12:35:01.708764    9928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0122 12:35:01.738184    9928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:35:01.745096    9928 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 22 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:35:01.745147    9928 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 22 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:35:01.757364    9928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:35:01.764879    9928 command_runner.go:130] > b5213941
	I0122 12:35:01.777202    9928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0122 12:35:01.806212    9928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6396.pem && ln -fs /usr/share/ca-certificates/6396.pem /etc/ssl/certs/6396.pem"
	I0122 12:35:01.836216    9928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6396.pem
	I0122 12:35:01.843018    9928 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 22 10:53 /usr/share/ca-certificates/6396.pem
	I0122 12:35:01.843018    9928 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 22 10:53 /usr/share/ca-certificates/6396.pem
	I0122 12:35:01.856594    9928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6396.pem
	I0122 12:35:01.864518    9928 command_runner.go:130] > 51391683
	I0122 12:35:01.880129    9928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6396.pem /etc/ssl/certs/51391683.0"
	I0122 12:35:01.913220    9928 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0122 12:35:01.918826    9928 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0122 12:35:01.919890    9928 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0122 12:35:01.931678    9928 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0122 12:35:01.969244    9928 command_runner.go:130] > cgroupfs
	I0122 12:35:01.969244    9928 cni.go:84] Creating CNI manager for ""
	I0122 12:35:01.969825    9928 cni.go:136] 3 nodes found, recommending kindnet
	I0122 12:35:01.969874    9928 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0122 12:35:01.969966    9928 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.23.187.210 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-484200 NodeName:multinode-484200-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.23.191.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.23.187.210 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0122 12:35:01.970158    9928 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.23.187.210
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-484200-m02"
	  kubeletExtraArgs:
	    node-ip: 172.23.187.210
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.23.191.211"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
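The rendered kubeadm config above is one file holding four YAML documents separated by `---`: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A quick structural sanity check over a trimmed stand-in for that file (kinds only, real field values omitted):

```shell
#!/usr/bin/env sh
# Write a trimmed stand-in for the four-document kubeadm config above
# and verify the expected `kind:` entries and separators are present.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
# Three separators delimit four documents.
seps=$(grep -c '^---$' "$cfg")
kinds=$(grep -c '^kind:' "$cfg")
echo "separators=$seps kinds=$kinds"
```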
	
	I0122 12:35:01.970296    9928 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-484200-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.187.210
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-484200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0122 12:35:01.982230    9928 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0122 12:35:01.998282    9928 command_runner.go:130] > kubeadm
	I0122 12:35:01.998644    9928 command_runner.go:130] > kubectl
	I0122 12:35:01.998644    9928 command_runner.go:130] > kubelet
	I0122 12:35:01.998719    9928 binaries.go:44] Found k8s binaries, skipping transfer
	I0122 12:35:02.016417    9928 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0122 12:35:02.031234    9928 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (383 bytes)
	I0122 12:35:02.064479    9928 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0122 12:35:02.107844    9928 ssh_runner.go:195] Run: grep 172.23.191.211	control-plane.minikube.internal$ /etc/hosts
	I0122 12:35:02.113428    9928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.191.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
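The one-liner at 12:35:02.113428 is an idempotent hosts-entry update: strip any existing `control-plane.minikube.internal` line, append the current mapping, and copy the result back over `/etc/hosts` in a single privileged step. The same pattern against a scratch file (the stale `172.23.0.9` entry is invented for the demo; `172.23.191.211` is the address from the log):

```shell
#!/usr/bin/env bash
# Idempotent hosts-entry update: remove any stale line for the name,
# append the fresh mapping, then replace the file in one copy.
HOSTS=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.23.0.9\tcontrol-plane.minikube.internal\n' > "$HOSTS"

update_hosts() {
    ip="$1"; name="$2"; file="$3"
    tmp=$(mktemp)
    # Drop any line ending in <tab><name>, then append the current entry.
    { grep -v $'\t'"${name}\$" "$file"; printf '%s\t%s\n' "$ip" "$name"; } > "$tmp"
    cp "$tmp" "$file" && rm -f "$tmp"
}

update_hosts 172.23.191.211 control-plane.minikube.internal "$HOSTS"
cat "$HOSTS"
```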
	I0122 12:35:02.145639    9928 host.go:66] Checking if "multinode-484200" exists ...
	I0122 12:35:02.146608    9928 config.go:182] Loaded profile config "multinode-484200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 12:35:02.146687    9928 start.go:304] JoinCluster: &{Name:multinode-484200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-484200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.23.191.211 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.187.210 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.23.187.157 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0122 12:35:02.146912    9928 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0122 12:35:02.146981    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:35:04.290059    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:35:04.290059    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:35:04.290059    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:35:06.823564    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:35:06.823802    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:35:06.824382    9928 sshutil.go:53] new ssh client: &{IP:172.23.191.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200\id_rsa Username:docker}
	I0122 12:35:07.026530    9928 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token c29mmz.igowyrvgk826eo54 --discovery-token-ca-cert-hash sha256:d9e856518777d1a58dcd52dda4caee52e72283bad61f5aa3ed0978c1365deb23 
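The join command printed above carries a `--discovery-token-ca-cert-hash`, which is the SHA-256 digest of the cluster CA's DER-encoded public key. The kubeadm-documented way to recompute it from a CA certificate, demonstrated here against a freshly generated self-signed stand-in (the real file would be the cluster's `ca.crt`, which is not available outside the VM):

```shell
#!/usr/bin/env bash
# Recompute a kubeadm discovery-token-ca-cert-hash: sha256 over the
# DER-encoded public key of the CA certificate.
set -e
dir=$(mktemp -d)
# Stand-in CA cert; a real cluster would use its existing ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -subj /CN=minikubeCA \
    -keyout "$dir/ca.key" -out "$dir/ca.crt" -days 1 2>/dev/null
hash=$(openssl x509 -pubkey -in "$dir/ca.crt" \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | awk '{print $NF}')
echo "sha256:$hash"
```

The resulting `sha256:<64 hex chars>` value is what a worker passes to `kubeadm join` to pin the control plane's identity.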
	I0122 12:35:07.026530    9928 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.8795968s)
	I0122 12:35:07.026530    9928 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.23.187.210 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0122 12:35:07.026530    9928 host.go:66] Checking if "multinode-484200" exists ...
	I0122 12:35:07.039516    9928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-484200-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0122 12:35:07.039516    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:35:09.205563    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:35:09.205563    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:35:09.205678    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:35:11.784579    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:35:11.784579    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:35:11.785308    9928 sshutil.go:53] new ssh client: &{IP:172.23.191.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200\id_rsa Username:docker}
	I0122 12:35:11.966776    9928 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0122 12:35:12.066807    9928 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-xqpt7, kube-system/kube-proxy-nsjc6
	I0122 12:35:15.089723    9928 command_runner.go:130] > node/multinode-484200-m02 cordoned
	I0122 12:35:15.089723    9928 command_runner.go:130] > pod "busybox-5b5d89c9d6-r46hz" has DeletionTimestamp older than 1 seconds, skipping
	I0122 12:35:15.089723    9928 command_runner.go:130] > node/multinode-484200-m02 drained
	I0122 12:35:15.089815    9928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-484200-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (8.0502637s)
	I0122 12:35:15.089815    9928 node.go:108] successfully drained node "m02"
	I0122 12:35:15.090591    9928 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 12:35:15.091310    9928 kapi.go:59] client config for multinode-484200: &rest.Config{Host:"https://172.23.191.211:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x184a600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0122 12:35:15.092634    9928 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0122 12:35:15.092634    9928 round_trippers.go:463] DELETE https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:35:15.092634    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:15.092634    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:15.092634    9928 round_trippers.go:473]     Content-Type: application/json
	I0122 12:35:15.092634    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:15.111476    9928 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0122 12:35:15.111476    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:15.111476    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:15.111476    9928 round_trippers.go:580]     Content-Length: 171
	I0122 12:35:15.111476    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:15 GMT
	I0122 12:35:15.111476    9928 round_trippers.go:580]     Audit-Id: a64fa694-352e-45b8-9bcf-0318f369ed64
	I0122 12:35:15.111476    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:15.111476    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:15.111710    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:15.111843    9928 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-484200-m02","kind":"nodes","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169"}}
	I0122 12:35:15.111931    9928 node.go:124] successfully deleted node "m02"
	I0122 12:35:15.111976    9928 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.23.187.210 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0122 12:35:15.112002    9928 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.23.187.210 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0122 12:35:15.112074    9928 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token c29mmz.igowyrvgk826eo54 --discovery-token-ca-cert-hash sha256:d9e856518777d1a58dcd52dda4caee52e72283bad61f5aa3ed0978c1365deb23 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-484200-m02"
	I0122 12:35:15.384211    9928 command_runner.go:130] ! W0122 12:35:15.384666    1366 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0122 12:35:15.929965    9928 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0122 12:35:17.725870    9928 command_runner.go:130] > [preflight] Running pre-flight checks
	I0122 12:35:17.725956    9928 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0122 12:35:17.725956    9928 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0122 12:35:17.725956    9928 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0122 12:35:17.725956    9928 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0122 12:35:17.726059    9928 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0122 12:35:17.726059    9928 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0122 12:35:17.726059    9928 command_runner.go:130] > This node has joined the cluster:
	I0122 12:35:17.726110    9928 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0122 12:35:17.726110    9928 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0122 12:35:17.726110    9928 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0122 12:35:17.726110    9928 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token c29mmz.igowyrvgk826eo54 --discovery-token-ca-cert-hash sha256:d9e856518777d1a58dcd52dda4caee52e72283bad61f5aa3ed0978c1365deb23 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-484200-m02": (2.6140036s)
	I0122 12:35:17.726175    9928 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0122 12:35:18.004036    9928 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0122 12:35:18.249529    9928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=02088786cd935ebc1444d13e603368558b96d0e3 minikube.k8s.io/name=multinode-484200 minikube.k8s.io/updated_at=2024_01_22T12_35_18_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 12:35:18.410803    9928 command_runner.go:130] > node/multinode-484200-m02 labeled
	I0122 12:35:18.410803    9928 command_runner.go:130] > node/multinode-484200-m03 labeled
	I0122 12:35:18.410803    9928 start.go:306] JoinCluster complete in 16.2641232s
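Lines 12:35:07 through 12:35:18 are the worker rejoin flow: drain the stale node, delete its Node object, mint a fresh join token on the control plane, run `kubeadm join` on the worker, then enable the kubelet. A dry-run sketch of that sequence (the `run` wrapper only echoes, since the real commands need a live cluster; the node and endpoint names are taken from the log, the token and hash placeholders are not real values):

```shell
#!/usr/bin/env bash
# Dry-run of the worker rejoin sequence from the log: each step is
# echoed rather than executed, since it requires a live cluster.
run() { echo "+ $*"; }

rejoin_worker() {
    node="$1"; endpoint="$2"
    run kubectl drain "$node" --force --ignore-daemonsets --delete-emptydir-data
    run kubectl delete node "$node"
    # On the control plane: mint a non-expiring token and its join command.
    run kubeadm token create --print-join-command --ttl=0
    # On the worker, using the printed token and CA cert hash:
    run kubeadm join "$endpoint" --token '<token>' \
        --discovery-token-ca-cert-hash '<hash>'
    run systemctl enable --now kubelet
}

rejoin_worker multinode-484200-m02 control-plane.minikube.internal:8443
```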
	I0122 12:35:18.410965    9928 cni.go:84] Creating CNI manager for ""
	I0122 12:35:18.410965    9928 cni.go:136] 3 nodes found, recommending kindnet
	I0122 12:35:18.424363    9928 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0122 12:35:18.436134    9928 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0122 12:35:18.436734    9928 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0122 12:35:18.436734    9928 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0122 12:35:18.436734    9928 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0122 12:35:18.436734    9928 command_runner.go:130] > Access: 2024-01-22 12:31:26.825009300 +0000
	I0122 12:35:18.436734    9928 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0122 12:35:18.436734    9928 command_runner.go:130] > Change: 2024-01-22 12:31:17.097000000 +0000
	I0122 12:35:18.436734    9928 command_runner.go:130] >  Birth: -
	I0122 12:35:18.437162    9928 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0122 12:35:18.437186    9928 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0122 12:35:18.480511    9928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0122 12:35:18.884127    9928 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0122 12:35:18.884234    9928 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0122 12:35:18.884234    9928 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0122 12:35:18.884234    9928 command_runner.go:130] > daemonset.apps/kindnet configured
	I0122 12:35:18.885509    9928 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 12:35:18.886355    9928 kapi.go:59] client config for multinode-484200: &rest.Config{Host:"https://172.23.191.211:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x184a600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0122 12:35:18.887081    9928 round_trippers.go:463] GET https://172.23.191.211:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0122 12:35:18.887167    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:18.887167    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:18.887167    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:18.891251    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:35:18.891251    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:18.891251    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:18.891251    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:18.891251    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:18.891251    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:18.891251    9928 round_trippers.go:580]     Content-Length: 292
	I0122 12:35:18.891251    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:18 GMT
	I0122 12:35:18.891251    9928 round_trippers.go:580]     Audit-Id: e0b95ad5-0b37-45af-92ac-c24fc2f57b93
	I0122 12:35:18.891251    9928 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"98df7e30-2026-4507-ad8d-589d82e9b1ba","resourceVersion":"1788","creationTimestamp":"2024-01-22T12:10:46Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0122 12:35:18.892279    9928 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-484200" context rescaled to 1 replicas
	I0122 12:35:18.892279    9928 start.go:223] Will wait 6m0s for node &{Name:m02 IP:172.23.187.210 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0122 12:35:18.892279    9928 out.go:177] * Verifying Kubernetes components...
	I0122 12:35:18.905233    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 12:35:18.930399    9928 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 12:35:18.931345    9928 kapi.go:59] client config for multinode-484200: &rest.Config{Host:"https://172.23.191.211:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x184a600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0122 12:35:18.931787    9928 node_ready.go:35] waiting up to 6m0s for node "multinode-484200-m02" to be "Ready" ...
	I0122 12:35:18.931787    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:35:18.931787    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:18.931787    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:18.931787    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:18.936014    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:35:18.936014    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:18.936014    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:18.936014    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:18 GMT
	I0122 12:35:18.936014    9928 round_trippers.go:580]     Audit-Id: 876996f0-b9ca-46a7-817a-3142ec5841dc
	I0122 12:35:18.936014    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:18.936014    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:18.936014    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:18.936014    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"ee2c1689-6eea-4fcd-b8b3-161d3aacd5be","resourceVersion":"1944","creationTimestamp":"2024-01-22T12:35:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_35_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:35:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3727 chars]
	I0122 12:35:19.438498    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:35:19.438498    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:19.438498    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:19.438498    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:19.444122    9928 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:35:19.444395    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:19.444395    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:19.444395    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:19.444395    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:19.444395    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:19 GMT
	I0122 12:35:19.444395    9928 round_trippers.go:580]     Audit-Id: eaf2baa2-53ae-476b-91b7-acc73c490a5f
	I0122 12:35:19.444395    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:19.444731    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"ee2c1689-6eea-4fcd-b8b3-161d3aacd5be","resourceVersion":"1944","creationTimestamp":"2024-01-22T12:35:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_35_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:35:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3727 chars]
	I0122 12:35:19.938707    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:35:19.938803    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:19.938803    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:19.938803    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:19.943389    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:35:19.943798    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:19.943798    9928 round_trippers.go:580]     Audit-Id: 0776c4be-d016-4917-8718-f4e7886d2da2
	I0122 12:35:19.943798    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:19.943798    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:19.943967    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:19.944034    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:19.944034    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:19 GMT
	I0122 12:35:19.944224    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"ee2c1689-6eea-4fcd-b8b3-161d3aacd5be","resourceVersion":"1944","creationTimestamp":"2024-01-22T12:35:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_35_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:35:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3727 chars]
	I0122 12:35:20.440603    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:35:20.440794    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:20.440794    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:20.440794    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:20.445687    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:35:20.445740    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:20.445740    9928 round_trippers.go:580]     Audit-Id: 18839761-1a9d-4899-98ab-8a0620da126d
	I0122 12:35:20.445740    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:20.445740    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:20.445740    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:20.445740    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:20.445740    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:20 GMT
	I0122 12:35:20.446104    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"ee2c1689-6eea-4fcd-b8b3-161d3aacd5be","resourceVersion":"1944","creationTimestamp":"2024-01-22T12:35:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_35_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:35:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3727 chars]
	I0122 12:35:20.946690    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:35:20.946939    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:20.946939    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:20.946939    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:20.950710    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:35:20.950871    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:20.950871    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:20.950871    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:20.950871    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:20.950871    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:20 GMT
	I0122 12:35:20.950871    9928 round_trippers.go:580]     Audit-Id: 97d554d0-d632-43de-afdf-1d5919217fcb
	I0122 12:35:20.950871    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:20.951422    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"ee2c1689-6eea-4fcd-b8b3-161d3aacd5be","resourceVersion":"1944","creationTimestamp":"2024-01-22T12:35:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_35_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:35:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3727 chars]
	I0122 12:35:20.951918    9928 node_ready.go:58] node "multinode-484200-m02" has status "Ready":"False"
	I0122 12:35:21.434748    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:35:21.435023    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:21.435023    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:21.435023    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:21.439669    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:35:21.439669    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:21.439838    9928 round_trippers.go:580]     Audit-Id: f3cb48d2-e2ea-4b1b-87b5-a166c33fa2a0
	I0122 12:35:21.439892    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:21.439917    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:21.439917    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:21.439947    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:21.439947    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:21 GMT
	I0122 12:35:21.440478    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"ee2c1689-6eea-4fcd-b8b3-161d3aacd5be","resourceVersion":"1944","creationTimestamp":"2024-01-22T12:35:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_35_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:35:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3727 chars]
	I0122 12:35:21.934454    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:35:21.934539    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:21.934539    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:21.934539    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:21.937944    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:35:21.937944    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:21.938761    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:21.938761    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:21.938761    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:21.938761    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:21.938869    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:21 GMT
	I0122 12:35:21.938930    9928 round_trippers.go:580]     Audit-Id: 45500207-bed1-424d-aa90-fd6c15fe8a26
	I0122 12:35:21.939115    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"ee2c1689-6eea-4fcd-b8b3-161d3aacd5be","resourceVersion":"1944","creationTimestamp":"2024-01-22T12:35:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_35_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:35:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3727 chars]
	I0122 12:35:22.438683    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:35:22.438683    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:22.438683    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:22.438823    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:22.445113    9928 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0122 12:35:22.445113    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:22.445113    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:22.445113    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:22.445113    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:22.445113    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:22 GMT
	I0122 12:35:22.445113    9928 round_trippers.go:580]     Audit-Id: 805dd9c1-2307-4a14-a3e7-ebddd8367c6a
	I0122 12:35:22.445113    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:22.445113    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"ee2c1689-6eea-4fcd-b8b3-161d3aacd5be","resourceVersion":"1944","creationTimestamp":"2024-01-22T12:35:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_35_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:35:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3727 chars]
	I0122 12:35:22.942919    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:35:22.943222    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:22.943222    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:22.943323    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:22.946892    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:35:22.946892    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:22.946992    9928 round_trippers.go:580]     Audit-Id: d662a3c5-b413-44f6-9df9-ae88efebb24b
	I0122 12:35:22.946992    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:22.946992    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:22.946992    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:22.947123    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:22.947123    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:22 GMT
	I0122 12:35:22.947290    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"ee2c1689-6eea-4fcd-b8b3-161d3aacd5be","resourceVersion":"1944","creationTimestamp":"2024-01-22T12:35:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_35_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:35:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3727 chars]
	I0122 12:35:23.446270    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:35:23.446410    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:23.446410    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:23.446410    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:23.449352    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:35:23.450373    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:23.450373    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:23 GMT
	I0122 12:35:23.450373    9928 round_trippers.go:580]     Audit-Id: 2355357e-20aa-4d24-985d-1d6a2f370f50
	I0122 12:35:23.450438    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:23.450438    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:23.450438    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:23.450438    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:23.450548    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"ee2c1689-6eea-4fcd-b8b3-161d3aacd5be","resourceVersion":"1944","creationTimestamp":"2024-01-22T12:35:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_35_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:35:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3727 chars]
	I0122 12:35:23.450858    9928 node_ready.go:58] node "multinode-484200-m02" has status "Ready":"False"
	I0122 12:35:23.946793    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:35:23.946877    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:23.946877    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:23.946942    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:23.949368    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:35:23.950429    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:23.950429    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:23.950429    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:23.950429    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:23 GMT
	I0122 12:35:23.950429    9928 round_trippers.go:580]     Audit-Id: cf2b837b-515c-4cce-8ac2-f1ca7c886efb
	I0122 12:35:23.950429    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:23.950429    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:23.950629    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"ee2c1689-6eea-4fcd-b8b3-161d3aacd5be","resourceVersion":"1944","creationTimestamp":"2024-01-22T12:35:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_35_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:35:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3727 chars]
	I0122 12:35:24.432970    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:35:24.433076    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:24.433076    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:24.433076    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:24.437021    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:35:24.437021    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:24.437181    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:24.437181    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:24.437181    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:24.437181    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:24.437181    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:24 GMT
	I0122 12:35:24.437181    9928 round_trippers.go:580]     Audit-Id: 09727209-bb96-48f7-853f-cd41f5e11eeb
	I0122 12:35:24.437362    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"ee2c1689-6eea-4fcd-b8b3-161d3aacd5be","resourceVersion":"1944","creationTimestamp":"2024-01-22T12:35:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_35_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:35:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3727 chars]
	I0122 12:35:24.934468    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:35:24.934468    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:24.934468    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:24.934468    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:24.940556    9928 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:35:24.940833    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:24.940833    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:24.940914    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:24 GMT
	I0122 12:35:24.940914    9928 round_trippers.go:580]     Audit-Id: 1250f2b5-d70c-4af2-98e6-7d0a1a2f588e
	I0122 12:35:24.940914    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:24.940914    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:24.940914    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:24.941197    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"ee2c1689-6eea-4fcd-b8b3-161d3aacd5be","resourceVersion":"1944","creationTimestamp":"2024-01-22T12:35:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_35_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:35:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3727 chars]
	I0122 12:35:25.435326    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:35:25.435406    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:25.435406    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:25.435406    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:25.441744    9928 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0122 12:35:25.441744    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:25.441880    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:25.441880    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:25.441880    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:25.441930    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:25.441930    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:25 GMT
	I0122 12:35:25.441930    9928 round_trippers.go:580]     Audit-Id: a1c8d148-d025-4739-8b8d-943365906567
	I0122 12:35:25.441930    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"ee2c1689-6eea-4fcd-b8b3-161d3aacd5be","resourceVersion":"1944","creationTimestamp":"2024-01-22T12:35:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_35_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:35:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3727 chars]
	I0122 12:35:25.937508    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:35:25.937601    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:25.937601    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:25.937601    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:25.941542    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:35:25.942106    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:25.942106    9928 round_trippers.go:580]     Audit-Id: d00c83a7-3a1a-41cc-a5c5-e3ed173b291d
	I0122 12:35:25.942106    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:25.942106    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:25.942106    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:25.942106    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:25.942203    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:25 GMT
	I0122 12:35:25.942266    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"ee2c1689-6eea-4fcd-b8b3-161d3aacd5be","resourceVersion":"1944","creationTimestamp":"2024-01-22T12:35:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_35_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:35:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3727 chars]
	I0122 12:35:25.942853    9928 node_ready.go:58] node "multinode-484200-m02" has status "Ready":"False"
	I0122 12:35:26.440628    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:35:26.440628    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:26.440628    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:26.440628    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:26.444634    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:35:26.444950    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:26.445036    9928 round_trippers.go:580]     Audit-Id: 915e36d7-5387-4580-aadf-4140f6c42993
	I0122 12:35:26.445036    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:26.445121    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:26.445212    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:26.445230    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:26.445230    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:26 GMT
	I0122 12:35:26.445380    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"ee2c1689-6eea-4fcd-b8b3-161d3aacd5be","resourceVersion":"1965","creationTimestamp":"2024-01-22T12:35:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_35_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:35:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3926 chars]
	I0122 12:35:26.445931    9928 node_ready.go:49] node "multinode-484200-m02" has status "Ready":"True"
	I0122 12:35:26.445931    9928 node_ready.go:38] duration metric: took 7.514111s waiting for node "multinode-484200-m02" to be "Ready" ...
	I0122 12:35:26.445931    9928 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0122 12:35:26.446171    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods
	I0122 12:35:26.446241    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:26.446241    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:26.446241    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:26.450623    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:35:26.451177    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:26.451177    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:26.451321    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:26 GMT
	I0122 12:35:26.451321    9928 round_trippers.go:580]     Audit-Id: 78c55c47-5380-4d97-839c-a47739f56ed2
	I0122 12:35:26.451321    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:26.451321    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:26.451321    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:26.453457    9928 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1967"},"items":[{"metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1783","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83433 chars]
	I0122 12:35:26.457610    9928 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-g9wk9" in "kube-system" namespace to be "Ready" ...
	I0122 12:35:26.457800    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:35:26.457800    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:26.457800    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:26.457800    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:26.461113    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:35:26.461113    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:26.461113    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:26.461113    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:26.461190    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:26.461190    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:26 GMT
	I0122 12:35:26.461190    9928 round_trippers.go:580]     Audit-Id: c0d44c98-09ff-4a93-bcd7-42a3adbdc300
	I0122 12:35:26.461190    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:26.461484    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1783","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6494 chars]
	I0122 12:35:26.462106    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:35:26.462106    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:26.462106    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:26.462106    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:26.464766    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:35:26.464766    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:26.464766    9928 round_trippers.go:580]     Audit-Id: 27e94612-5805-4920-a4c9-0f18bd83e7e7
	I0122 12:35:26.465255    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:26.465255    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:26.465255    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:26.465255    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:26.465255    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:26 GMT
	I0122 12:35:26.465561    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:35:26.465946    9928 pod_ready.go:92] pod "coredns-5dd5756b68-g9wk9" in "kube-system" namespace has status "Ready":"True"
	I0122 12:35:26.466039    9928 pod_ready.go:81] duration metric: took 8.3517ms waiting for pod "coredns-5dd5756b68-g9wk9" in "kube-system" namespace to be "Ready" ...
	I0122 12:35:26.466039    9928 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:35:26.466177    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-484200
	I0122 12:35:26.466177    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:26.466244    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:26.466268    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:26.471307    9928 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:35:26.471307    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:26.471307    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:26 GMT
	I0122 12:35:26.471307    9928 round_trippers.go:580]     Audit-Id: 4c28e895-4ba3-45c3-a398-1d1e152ced18
	I0122 12:35:26.471307    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:26.471307    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:26.471307    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:26.471307    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:26.471881    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-484200","namespace":"kube-system","uid":"dc38500e-1aab-483b-bc82-a1071a7b7796","resourceVersion":"1771","creationTimestamp":"2024-01-22T12:32:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.191.211:2379","kubernetes.io/config.hash":"487d659dc59bb7ccbc4aa1b97ecfd4c1","kubernetes.io/config.mirror":"487d659dc59bb7ccbc4aa1b97ecfd4c1","kubernetes.io/config.seen":"2024-01-22T12:32:47.015233808Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:32:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 5873 chars]
	I0122 12:35:26.472829    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:35:26.472906    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:26.472906    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:26.472906    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:26.476108    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:35:26.476108    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:26.476108    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:26.476108    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:26 GMT
	I0122 12:35:26.476108    9928 round_trippers.go:580]     Audit-Id: 81e03753-195d-4d8c-9803-56ffe4a2f0e8
	I0122 12:35:26.476108    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:26.476108    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:26.476108    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:26.476857    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:35:26.477293    9928 pod_ready.go:92] pod "etcd-multinode-484200" in "kube-system" namespace has status "Ready":"True"
	I0122 12:35:26.477293    9928 pod_ready.go:81] duration metric: took 11.2545ms waiting for pod "etcd-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:35:26.477341    9928 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:35:26.477438    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-484200
	I0122 12:35:26.477438    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:26.477438    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:26.477438    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:26.480198    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:35:26.480198    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:26.481058    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:26 GMT
	I0122 12:35:26.481101    9928 round_trippers.go:580]     Audit-Id: 5b55bdd4-488a-4aa1-941c-fa271909e030
	I0122 12:35:26.481101    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:26.481101    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:26.481101    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:26.481101    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:26.481101    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-484200","namespace":"kube-system","uid":"1a34c38d-813d-4a41-ab10-ee15c4a85906","resourceVersion":"1757","creationTimestamp":"2024-01-22T12:32:54Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.191.211:8443","kubernetes.io/config.hash":"58aa3387db541c8286ea4c05e28153aa","kubernetes.io/config.mirror":"58aa3387db541c8286ea4c05e28153aa","kubernetes.io/config.seen":"2024-01-22T12:32:47.015239308Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:32:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7409 chars]
	I0122 12:35:26.482060    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:35:26.482164    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:26.482164    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:26.482164    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:26.483838    9928 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0122 12:35:26.483838    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:26.483838    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:26.484824    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:26.484824    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:26.484824    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:26 GMT
	I0122 12:35:26.484824    9928 round_trippers.go:580]     Audit-Id: f5aafc79-d111-4fdb-9c12-db65a09d4bfe
	I0122 12:35:26.484824    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:26.484824    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:35:26.484824    9928 pod_ready.go:92] pod "kube-apiserver-multinode-484200" in "kube-system" namespace has status "Ready":"True"
	I0122 12:35:26.484824    9928 pod_ready.go:81] duration metric: took 7.4522ms waiting for pod "kube-apiserver-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:35:26.484824    9928 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:35:26.485637    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-484200
	I0122 12:35:26.485637    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:26.485637    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:26.485637    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:26.488732    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:35:26.488732    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:26.488732    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:26.488732    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:26.488732    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:26.488732    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:26.488732    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:26 GMT
	I0122 12:35:26.489588    9928 round_trippers.go:580]     Audit-Id: 584aaa57-9228-4112-b620-95d2de58f979
	I0122 12:35:26.489682    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-484200","namespace":"kube-system","uid":"2f20b1d1-9ba0-417b-9060-a0e50a1522ad","resourceVersion":"1769","creationTimestamp":"2024-01-22T12:10:47Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"09e3d582188e7c296d7c87aae161a2a5","kubernetes.io/config.mirror":"09e3d582188e7c296d7c87aae161a2a5","kubernetes.io/config.seen":"2024-01-22T12:10:46.856991974Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7179 chars]
	I0122 12:35:26.490490    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:35:26.490490    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:26.490490    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:26.490490    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:26.493111    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:35:26.493413    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:26.493413    9928 round_trippers.go:580]     Audit-Id: ef3f1cbf-9c2c-4471-a2c2-f18b32c43e96
	I0122 12:35:26.493413    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:26.493478    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:26.493529    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:26.493578    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:26.493578    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:26 GMT
	I0122 12:35:26.493578    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:35:26.494285    9928 pod_ready.go:92] pod "kube-controller-manager-multinode-484200" in "kube-system" namespace has status "Ready":"True"
	I0122 12:35:26.494329    9928 pod_ready.go:81] duration metric: took 9.5051ms waiting for pod "kube-controller-manager-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:35:26.494329    9928 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m94rg" in "kube-system" namespace to be "Ready" ...
	I0122 12:35:26.644301    9928 request.go:629] Waited for 149.7061ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m94rg
	I0122 12:35:26.644301    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m94rg
	I0122 12:35:26.644301    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:26.644529    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:26.644529    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:26.648595    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:35:26.648595    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:26.649604    9928 round_trippers.go:580]     Audit-Id: fc5145e8-15f9-4e9b-aa76-41ad10153172
	I0122 12:35:26.649604    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:26.649638    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:26.649638    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:26.649638    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:26.649638    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:26 GMT
	I0122 12:35:26.649809    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-m94rg","generateName":"kube-proxy-","namespace":"kube-system","uid":"603ddbfd-0357-4e57-a38a-5054e02f81ee","resourceVersion":"1740","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7762395d-4348-422a-b76e-6e2cdf460767","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7762395d-4348-422a-b76e-6e2cdf460767\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5743 chars]
	I0122 12:35:26.848951    9928 request.go:629] Waited for 198.0154ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:35:26.849059    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:35:26.849059    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:26.849059    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:26.849219    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:26.855031    9928 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:35:26.855031    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:26.855031    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:26.855031    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:26.855031    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:26.855031    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:26 GMT
	I0122 12:35:26.855031    9928 round_trippers.go:580]     Audit-Id: 6ef9a331-dcec-429b-b769-1094a4fb83a8
	I0122 12:35:26.855193    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:26.855425    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:35:26.856462    9928 pod_ready.go:92] pod "kube-proxy-m94rg" in "kube-system" namespace has status "Ready":"True"
	I0122 12:35:26.856462    9928 pod_ready.go:81] duration metric: took 362.1314ms waiting for pod "kube-proxy-m94rg" in "kube-system" namespace to be "Ready" ...
	I0122 12:35:26.856644    9928 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mxd9x" in "kube-system" namespace to be "Ready" ...
	I0122 12:35:27.053024    9928 request.go:629] Waited for 195.8904ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxd9x
	I0122 12:35:27.053311    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxd9x
	I0122 12:35:27.053522    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:27.053576    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:27.053576    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:27.061217    9928 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0122 12:35:27.061217    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:27.061217    9928 round_trippers.go:580]     Audit-Id: 6ad44ce7-2adf-4b1b-9521-3dd81b76ef37
	I0122 12:35:27.061516    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:27.061516    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:27.061516    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:27.061516    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:27.061516    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:27 GMT
	I0122 12:35:27.061806    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mxd9x","generateName":"kube-proxy-","namespace":"kube-system","uid":"f90d5b6e-4f0e-466f-8cc8-bc71ebdbad3c","resourceVersion":"1824","creationTimestamp":"2024-01-22T12:18:24Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7762395d-4348-422a-b76e-6e2cdf460767","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:18:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7762395d-4348-422a-b76e-6e2cdf460767\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5976 chars]
	I0122 12:35:27.240773    9928 request.go:629] Waited for 178.265ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:35:27.240994    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:35:27.240994    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:27.240994    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:27.241085    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:27.244894    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:35:27.245355    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:27.245355    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:27 GMT
	I0122 12:35:27.245407    9928 round_trippers.go:580]     Audit-Id: 81c8ec66-3696-4977-9ae5-a542752b6b21
	I0122 12:35:27.245407    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:27.245407    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:27.245407    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:27.245460    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:27.248156    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"1d1fc222-9802-4017-8aee-b3310ac58592","resourceVersion":"1946","creationTimestamp":"2024-01-22T12:28:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_35_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:28:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 4393 chars]
	I0122 12:35:27.248356    9928 pod_ready.go:97] node "multinode-484200-m03" hosting pod "kube-proxy-mxd9x" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484200-m03" has status "Ready":"Unknown"
	I0122 12:35:27.248356    9928 pod_ready.go:81] duration metric: took 391.7105ms waiting for pod "kube-proxy-mxd9x" in "kube-system" namespace to be "Ready" ...
	E0122 12:35:27.248356    9928 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-484200-m03" hosting pod "kube-proxy-mxd9x" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484200-m03" has status "Ready":"Unknown"
	I0122 12:35:27.248356    9928 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nsjc6" in "kube-system" namespace to be "Ready" ...
	I0122 12:35:27.444546    9928 request.go:629] Waited for 196.019ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nsjc6
	I0122 12:35:27.444613    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nsjc6
	I0122 12:35:27.444613    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:27.444613    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:27.444613    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:27.450231    9928 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:35:27.450468    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:27.450468    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:27.450468    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:27.450546    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:27 GMT
	I0122 12:35:27.450546    9928 round_trippers.go:580]     Audit-Id: 6a1e5f56-e988-4402-a362-161dd3a9935e
	I0122 12:35:27.450546    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:27.450546    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:27.450546    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nsjc6","generateName":"kube-proxy-","namespace":"kube-system","uid":"6023ce45-661a-489d-aacf-21a85a2c32e4","resourceVersion":"1947","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7762395d-4348-422a-b76e-6e2cdf460767","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7762395d-4348-422a-b76e-6e2cdf460767\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5751 chars]
	I0122 12:35:27.646434    9928 request.go:629] Waited for 195.1148ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:35:27.646751    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:35:27.646751    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:27.646751    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:27.646751    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:27.654639    9928 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0122 12:35:27.654798    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:27.654798    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:27.654859    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:27.654859    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:27.654859    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:27 GMT
	I0122 12:35:27.654859    9928 round_trippers.go:580]     Audit-Id: bc45893b-4b99-49af-9bac-061f9306a9c1
	I0122 12:35:27.654859    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:27.654859    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"ee2c1689-6eea-4fcd-b8b3-161d3aacd5be","resourceVersion":"1969","creationTimestamp":"2024-01-22T12:35:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_35_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:35:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3806 chars]
	I0122 12:35:27.655625    9928 pod_ready.go:92] pod "kube-proxy-nsjc6" in "kube-system" namespace has status "Ready":"True"
	I0122 12:35:27.655625    9928 pod_ready.go:81] duration metric: took 407.2669ms waiting for pod "kube-proxy-nsjc6" in "kube-system" namespace to be "Ready" ...
	I0122 12:35:27.655625    9928 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:35:27.849781    9928 request.go:629] Waited for 193.751ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-484200
	I0122 12:35:27.849837    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-484200
	I0122 12:35:27.849837    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:27.849837    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:27.849837    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:27.854262    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:35:27.854262    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:27.854262    9928 round_trippers.go:580]     Audit-Id: 15954030-35bd-41ab-892c-650fef65d433
	I0122 12:35:27.854262    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:27.854262    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:27.854262    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:27.854262    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:27.854262    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:27 GMT
	I0122 12:35:27.854262    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-484200","namespace":"kube-system","uid":"d2f95119-a5d5-4331-880e-bf6add1d87b6","resourceVersion":"1753","creationTimestamp":"2024-01-22T12:10:47Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"59ca1fdff10e7440743e9ed3e4fed55c","kubernetes.io/config.mirror":"59ca1fdff10e7440743e9ed3e4fed55c","kubernetes.io/config.seen":"2024-01-22T12:10:46.856993074Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4909 chars]
	I0122 12:35:28.052801    9928 request.go:629] Waited for 197.2338ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:35:28.052801    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:35:28.052801    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:28.052801    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:28.052801    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:28.058438    9928 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:35:28.059172    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:28.059172    9928 round_trippers.go:580]     Audit-Id: 167302ba-7f6e-4ab1-9c46-11eabe879fb3
	I0122 12:35:28.059172    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:28.059172    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:28.059172    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:28.059172    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:28.059172    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:28 GMT
	I0122 12:35:28.059172    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:35:28.059957    9928 pod_ready.go:92] pod "kube-scheduler-multinode-484200" in "kube-system" namespace has status "Ready":"True"
	I0122 12:35:28.059957    9928 pod_ready.go:81] duration metric: took 404.3303ms waiting for pod "kube-scheduler-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:35:28.059957    9928 pod_ready.go:38] duration metric: took 1.6140197s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0122 12:35:28.059957    9928 system_svc.go:44] waiting for kubelet service to be running ....
	I0122 12:35:28.080523    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 12:35:28.101342    9928 system_svc.go:56] duration metric: took 41.3851ms WaitForService to wait for kubelet.
	I0122 12:35:28.101471    9928 kubeadm.go:581] duration metric: took 9.2091519s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0122 12:35:28.101471    9928 node_conditions.go:102] verifying NodePressure condition ...
	I0122 12:35:28.255369    9928 request.go:629] Waited for 153.7017ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/nodes
	I0122 12:35:28.255492    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes
	I0122 12:35:28.255492    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:28.255492    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:28.255492    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:28.260429    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:35:28.260429    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:28.260429    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:28 GMT
	I0122 12:35:28.260429    9928 round_trippers.go:580]     Audit-Id: 9f7e670a-796d-4a27-81d5-e4328198c06f
	I0122 12:35:28.260429    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:28.260429    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:28.260429    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:28.260429    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:28.261681    9928 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1973"},"items":[{"metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15475 chars]
	I0122 12:35:28.262852    9928 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0122 12:35:28.262933    9928 node_conditions.go:123] node cpu capacity is 2
	I0122 12:35:28.262933    9928 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0122 12:35:28.263032    9928 node_conditions.go:123] node cpu capacity is 2
	I0122 12:35:28.263050    9928 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0122 12:35:28.263050    9928 node_conditions.go:123] node cpu capacity is 2
	I0122 12:35:28.263050    9928 node_conditions.go:105] duration metric: took 161.578ms to run NodePressure ...
	I0122 12:35:28.263050    9928 start.go:228] waiting for startup goroutines ...
	I0122 12:35:28.263120    9928 start.go:242] writing updated cluster config ...
	I0122 12:35:28.279657    9928 config.go:182] Loaded profile config "multinode-484200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 12:35:28.279657    9928 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\config.json ...
	I0122 12:35:28.284547    9928 out.go:177] * Starting worker node multinode-484200-m03 in cluster multinode-484200
	I0122 12:35:28.285111    9928 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0122 12:35:28.285111    9928 cache.go:56] Caching tarball of preloaded images
	I0122 12:35:28.285111    9928 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0122 12:35:28.285111    9928 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0122 12:35:28.285779    9928 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\config.json ...
	I0122 12:35:28.292491    9928 start.go:365] acquiring machines lock for multinode-484200-m03: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0122 12:35:28.292491    9928 start.go:369] acquired machines lock for "multinode-484200-m03" in 0s
	I0122 12:35:28.293459    9928 start.go:96] Skipping create...Using existing machine configuration
	I0122 12:35:28.293459    9928 fix.go:54] fixHost starting: m03
	I0122 12:35:28.293459    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:35:30.432135    9928 main.go:141] libmachine: [stdout =====>] : Off
	
	I0122 12:35:30.432282    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:35:30.432282    9928 fix.go:102] recreateIfNeeded on multinode-484200-m03: state=Stopped err=<nil>
	W0122 12:35:30.432282    9928 fix.go:128] unexpected machine state, will restart: <nil>
	I0122 12:35:30.433783    9928 out.go:177] * Restarting existing hyperv VM for "multinode-484200-m03" ...
	I0122 12:35:30.434229    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-484200-m03
	I0122 12:35:32.949069    9928 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:35:32.949069    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:35:32.949264    9928 main.go:141] libmachine: Waiting for host to start...
	I0122 12:35:32.949264    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:35:35.252818    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:35:35.252818    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:35:35.252818    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:35:37.781335    9928 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:35:37.781335    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:35:38.784839    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:35:41.073052    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:35:41.073127    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:35:41.073226    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:35:43.663502    9928 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:35:43.663502    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:35:44.666972    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:35:46.906535    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:35:46.906794    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:35:46.906794    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:35:49.514914    9928 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:35:49.514914    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:35:50.520755    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:35:52.765806    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:35:52.765806    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:35:52.765891    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:35:55.390435    9928 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:35:55.390435    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:35:56.393757    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:35:58.683614    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:35:58.683873    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:35:58.683921    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:36:01.318946    9928 main.go:141] libmachine: [stdout =====>] : 172.23.181.87
	
	I0122 12:36:01.319066    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:01.322354    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:36:03.520512    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:36:03.520512    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:03.520512    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:36:06.109581    9928 main.go:141] libmachine: [stdout =====>] : 172.23.181.87
	
	I0122 12:36:06.109660    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:06.110021    9928 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\config.json ...
	I0122 12:36:06.112549    9928 machine.go:88] provisioning docker machine ...
	I0122 12:36:06.112631    9928 buildroot.go:166] provisioning hostname "multinode-484200-m03"
	I0122 12:36:06.112631    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:36:08.278582    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:36:08.278582    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:08.278772    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:36:10.876025    9928 main.go:141] libmachine: [stdout =====>] : 172.23.181.87
	
	I0122 12:36:10.876229    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:10.884287    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:36:10.885198    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.181.87 22 <nil> <nil>}
	I0122 12:36:10.885198    9928 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-484200-m03 && echo "multinode-484200-m03" | sudo tee /etc/hostname
	I0122 12:36:11.049385    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-484200-m03
	
	I0122 12:36:11.049385    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:36:13.249746    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:36:13.249805    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:13.249805    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:36:15.829391    9928 main.go:141] libmachine: [stdout =====>] : 172.23.181.87
	
	I0122 12:36:15.829391    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:15.835464    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:36:15.835764    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.181.87 22 <nil> <nil>}
	I0122 12:36:15.836348    9928 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-484200-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-484200-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-484200-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0122 12:36:15.990546    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 12:36:15.990626    9928 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0122 12:36:15.990626    9928 buildroot.go:174] setting up certificates
	I0122 12:36:15.990626    9928 provision.go:83] configureAuth start
	I0122 12:36:15.990724    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:36:18.137922    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:36:18.138227    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:18.138227    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:36:20.716189    9928 main.go:141] libmachine: [stdout =====>] : 172.23.181.87
	
	I0122 12:36:20.716248    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:20.716323    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:36:22.836723    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:36:22.837026    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:22.837026    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:36:25.417189    9928 main.go:141] libmachine: [stdout =====>] : 172.23.181.87
	
	I0122 12:36:25.417189    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:25.417278    9928 provision.go:138] copyHostCerts
	I0122 12:36:25.417278    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0122 12:36:25.417278    9928 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0122 12:36:25.417278    9928 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0122 12:36:25.418191    9928 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0122 12:36:25.419251    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0122 12:36:25.419251    9928 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0122 12:36:25.419251    9928 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0122 12:36:25.420069    9928 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0122 12:36:25.420919    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0122 12:36:25.420919    9928 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0122 12:36:25.420919    9928 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0122 12:36:25.421743    9928 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0122 12:36:25.422644    9928 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-484200-m03 san=[172.23.181.87 172.23.181.87 localhost 127.0.0.1 minikube multinode-484200-m03]
	I0122 12:36:25.686530    9928 provision.go:172] copyRemoteCerts
	I0122 12:36:25.698886    9928 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0122 12:36:25.698886    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:36:27.899158    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:36:27.899158    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:27.899158    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:36:30.473383    9928 main.go:141] libmachine: [stdout =====>] : 172.23.181.87
	
	I0122 12:36:30.473383    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:30.473383    9928 sshutil.go:53] new ssh client: &{IP:172.23.181.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200-m03\id_rsa Username:docker}
	I0122 12:36:30.582960    9928 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8837906s)
	I0122 12:36:30.583143    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0122 12:36:30.583462    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0122 12:36:30.633621    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0122 12:36:30.633621    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I0122 12:36:30.672769    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0122 12:36:30.672913    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0122 12:36:30.712441    9928 provision.go:86] duration metric: configureAuth took 14.7216588s
	I0122 12:36:30.712501    9928 buildroot.go:189] setting minikube options for container-runtime
	I0122 12:36:30.713083    9928 config.go:182] Loaded profile config "multinode-484200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 12:36:30.713122    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:36:32.876696    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:36:32.876696    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:32.876911    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:36:35.449271    9928 main.go:141] libmachine: [stdout =====>] : 172.23.181.87
	
	I0122 12:36:35.449431    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:35.455322    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:36:35.456018    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.181.87 22 <nil> <nil>}
	I0122 12:36:35.456018    9928 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0122 12:36:35.598102    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0122 12:36:35.598102    9928 buildroot.go:70] root file system type: tmpfs
	I0122 12:36:35.598360    9928 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0122 12:36:35.598424    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:36:37.739597    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:36:37.739597    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:37.739816    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:36:40.277829    9928 main.go:141] libmachine: [stdout =====>] : 172.23.181.87
	
	I0122 12:36:40.277829    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:40.282800    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:36:40.283619    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.181.87 22 <nil> <nil>}
	I0122 12:36:40.284246    9928 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.23.191.211"
	Environment="NO_PROXY=172.23.191.211,172.23.187.210"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0122 12:36:40.448278    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.23.191.211
	Environment=NO_PROXY=172.23.191.211,172.23.187.210
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0122 12:36:40.448278    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:36:42.596863    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:36:42.597111    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:42.597229    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:36:45.146049    9928 main.go:141] libmachine: [stdout =====>] : 172.23.181.87
	
	I0122 12:36:45.146049    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:45.152084    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:36:45.152793    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.181.87 22 <nil> <nil>}
	I0122 12:36:45.152793    9928 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0122 12:36:46.215147    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0122 12:36:46.215256    9928 machine.go:91] provisioned docker machine in 40.1024956s
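The provisioning sequence above ends with minikube's idempotent unit-install idiom: write the candidate unit to `docker.service.new`, `diff` it against the live unit, and only when they differ move the new file into place and reload/enable/restart. The `diff: can't stat` line is the expected first-run case. A minimal sketch of the same idiom against a throwaway directory (temp paths instead of `/lib/systemd/system`, no `sudo`; this is an illustration, not minikube's code):

```shell
#!/bin/sh
# Sketch of the diff-or-install idiom from the log:
#   diff -u current new || { mv new current; reload; }
# Runs in a temp dir so no root is needed.
dir=$(mktemp -d)
printf '%s\n' 'ExecStart=/usr/bin/dockerd' > "$dir/docker.service.new"

# First run: docker.service does not exist, so diff fails (the log's
# "can't stat" path) and the new unit is moved into place. On later
# runs with an identical unit, diff succeeds and nothing is replaced.
diff -u "$dir/docker.service" "$dir/docker.service.new" 2>/dev/null \
  || mv "$dir/docker.service.new" "$dir/docker.service"

cat "$dir/docker.service"
```

In the real flow the `||` branch also runs `systemctl daemon-reload`, `enable`, and `restart`, which is what produces the `Created symlink ...` line in the output above.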
	I0122 12:36:46.215256    9928 start.go:300] post-start starting for "multinode-484200-m03" (driver="hyperv")
	I0122 12:36:46.215256    9928 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0122 12:36:46.229048    9928 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0122 12:36:46.229048    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:36:48.385994    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:36:48.386409    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:48.386492    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:36:50.931182    9928 main.go:141] libmachine: [stdout =====>] : 172.23.181.87
	
	I0122 12:36:50.931182    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:50.931796    9928 sshutil.go:53] new ssh client: &{IP:172.23.181.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200-m03\id_rsa Username:docker}
	I0122 12:36:51.042274    9928 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8132052s)
	I0122 12:36:51.055263    9928 ssh_runner.go:195] Run: cat /etc/os-release
	I0122 12:36:51.061736    9928 command_runner.go:130] > NAME=Buildroot
	I0122 12:36:51.061736    9928 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0122 12:36:51.061736    9928 command_runner.go:130] > ID=buildroot
	I0122 12:36:51.061736    9928 command_runner.go:130] > VERSION_ID=2021.02.12
	I0122 12:36:51.061736    9928 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0122 12:36:51.062063    9928 info.go:137] Remote host: Buildroot 2021.02.12
	I0122 12:36:51.062155    9928 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0122 12:36:51.062714    9928 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0122 12:36:51.064049    9928 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> 63962.pem in /etc/ssl/certs
	I0122 12:36:51.064112    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> /etc/ssl/certs/63962.pem
	I0122 12:36:51.078089    9928 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0122 12:36:51.095169    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem --> /etc/ssl/certs/63962.pem (1708 bytes)
	I0122 12:36:51.138950    9928 start.go:303] post-start completed in 4.9236723s
	I0122 12:36:51.140083    9928 fix.go:56] fixHost completed within 1m22.8462679s
	I0122 12:36:51.140188    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:36:53.316024    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:36:53.316024    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:53.316134    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:36:55.963590    9928 main.go:141] libmachine: [stdout =====>] : 172.23.181.87
	
	I0122 12:36:55.963590    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:55.968500    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:36:55.969250    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.181.87 22 <nil> <nil>}
	I0122 12:36:55.969774    9928 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0122 12:36:56.110828    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705927016.111564075
	
	I0122 12:36:56.110929    9928 fix.go:206] guest clock: 1705927016.111564075
	I0122 12:36:56.110929    9928 fix.go:219] Guest: 2024-01-22 12:36:56.111564075 +0000 UTC Remote: 2024-01-22 12:36:51.1400838 +0000 UTC m=+360.951539301 (delta=4.971480275s)
	I0122 12:36:56.111024    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:36:58.328820    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:36:58.328984    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:58.329245    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:37:00.952705    9928 main.go:141] libmachine: [stdout =====>] : 172.23.181.87
	
	I0122 12:37:00.952705    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:37:00.958346    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:37:00.959122    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.181.87 22 <nil> <nil>}
	I0122 12:37:00.959122    9928 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1705927016
	I0122 12:37:01.108609    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan 22 12:36:56 UTC 2024
	
	I0122 12:37:01.108609    9928 fix.go:226] clock set: Mon Jan 22 12:36:56 UTC 2024
	 (err=<nil>)
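The clock-fix step above reads the guest clock over SSH with `date +%s.%N`, computes the delta against the host's view of the lock-release time (here 4.97s), and resets the guest with `sudo date -s @<epoch>`. A sketch of that comparison using the epoch values from this log (the drift threshold below is an assumption for illustration; minikube's actual cutoff lives in `fix.go`):

```shell
#!/bin/sh
# Sketch of the guest-clock drift check from the log.
guest=1705927016   # epoch seconds reported by `date +%s` on the guest
host=1705927011    # host-side epoch seconds at the same moment
delta=$((guest - host))
# Take the absolute value of the drift.
if [ "$delta" -lt 0 ]; then delta=$((-delta)); fi
# Threshold of 2s is an assumed illustration value.
if [ "$delta" -gt 2 ]; then
  echo "clock drift ${delta}s; would run: sudo date -s @$host"
fi
```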
	I0122 12:37:01.108609    9928 start.go:83] releasing machines lock for "multinode-484200-m03", held for 1m32.8157192s
	I0122 12:37:01.108609    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:37:03.310877    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:37:03.311114    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:37:03.311487    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:37:05.871461    9928 main.go:141] libmachine: [stdout =====>] : 172.23.181.87
	
	I0122 12:37:05.871461    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:37:05.872254    9928 out.go:177] * Found network options:
	I0122 12:37:05.873597    9928 out.go:177]   - NO_PROXY=172.23.191.211,172.23.187.210
	W0122 12:37:05.874429    9928 proxy.go:119] fail to check proxy env: Error ip not in block
	W0122 12:37:05.874429    9928 proxy.go:119] fail to check proxy env: Error ip not in block
	I0122 12:37:05.875426    9928 out.go:177]   - NO_PROXY=172.23.191.211,172.23.187.210
	W0122 12:37:05.876068    9928 proxy.go:119] fail to check proxy env: Error ip not in block
	W0122 12:37:05.876068    9928 proxy.go:119] fail to check proxy env: Error ip not in block
	W0122 12:37:05.877455    9928 proxy.go:119] fail to check proxy env: Error ip not in block
	W0122 12:37:05.877455    9928 proxy.go:119] fail to check proxy env: Error ip not in block
	I0122 12:37:05.880332    9928 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0122 12:37:05.880405    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:37:05.892453    9928 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0122 12:37:05.892453    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:37:08.165977    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:37:08.166063    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:37:08.166063    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:37:08.166063    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:37:08.166063    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:37:08.166063    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:37:10.887903    9928 main.go:141] libmachine: [stdout =====>] : 172.23.181.87
	
	I0122 12:37:10.887903    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:37:10.888295    9928 sshutil.go:53] new ssh client: &{IP:172.23.181.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200-m03\id_rsa Username:docker}
	I0122 12:37:10.911607    9928 main.go:141] libmachine: [stdout =====>] : 172.23.181.87
	
	I0122 12:37:10.911607    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:37:10.911607    9928 sshutil.go:53] new ssh client: &{IP:172.23.181.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200-m03\id_rsa Username:docker}
	I0122 12:37:11.091560    9928 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0122 12:37:11.091560    9928 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2112053s)
	I0122 12:37:11.091560    9928 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0122 12:37:11.091560    9928 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1990848s)
	W0122 12:37:11.091560    9928 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0122 12:37:11.110495    9928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0122 12:37:11.144094    9928 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0122 12:37:11.144094    9928 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0122 12:37:11.144094    9928 start.go:475] detecting cgroup driver to use...
	I0122 12:37:11.144094    9928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 12:37:11.179213    9928 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0122 12:37:11.192555    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0122 12:37:11.222338    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0122 12:37:11.239158    9928 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0122 12:37:11.253002    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0122 12:37:11.282304    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 12:37:11.315305    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0122 12:37:11.347421    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 12:37:11.380523    9928 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0122 12:37:11.412609    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
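The run of `sed` commands above switches containerd to the `cgroupfs` driver and pins the pause image by rewriting `/etc/containerd/config.toml` in place. A sketch of the two key rewrites against a throwaway copy of the config (temp file, no `sudo`; GNU `sed -i -r` as used in the log):

```shell
#!/bin/sh
# Sketch of the containerd config rewrite from the log, applied to a
# temp file standing in for /etc/containerd/config.toml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
  SystemdCgroup = true
  sandbox_image = "registry.k8s.io/pause:3.8"
EOF
# Same substitutions the log shows; \1 keeps the original indentation.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
cat "$cfg"
```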
	I0122 12:37:11.445990    9928 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0122 12:37:11.464475    9928 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0122 12:37:11.476987    9928 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0122 12:37:11.506874    9928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 12:37:11.677863    9928 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0122 12:37:11.704866    9928 start.go:475] detecting cgroup driver to use...
	I0122 12:37:11.717822    9928 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0122 12:37:11.737819    9928 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0122 12:37:11.737819    9928 command_runner.go:130] > [Unit]
	I0122 12:37:11.737819    9928 command_runner.go:130] > Description=Docker Application Container Engine
	I0122 12:37:11.737819    9928 command_runner.go:130] > Documentation=https://docs.docker.com
	I0122 12:37:11.737819    9928 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0122 12:37:11.737819    9928 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0122 12:37:11.737819    9928 command_runner.go:130] > StartLimitBurst=3
	I0122 12:37:11.737819    9928 command_runner.go:130] > StartLimitIntervalSec=60
	I0122 12:37:11.737819    9928 command_runner.go:130] > [Service]
	I0122 12:37:11.737819    9928 command_runner.go:130] > Type=notify
	I0122 12:37:11.737819    9928 command_runner.go:130] > Restart=on-failure
	I0122 12:37:11.737819    9928 command_runner.go:130] > Environment=NO_PROXY=172.23.191.211
	I0122 12:37:11.737819    9928 command_runner.go:130] > Environment=NO_PROXY=172.23.191.211,172.23.187.210
	I0122 12:37:11.737819    9928 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0122 12:37:11.737819    9928 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0122 12:37:11.737819    9928 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0122 12:37:11.737819    9928 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0122 12:37:11.737819    9928 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0122 12:37:11.737819    9928 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0122 12:37:11.737819    9928 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0122 12:37:11.737819    9928 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0122 12:37:11.737819    9928 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0122 12:37:11.737819    9928 command_runner.go:130] > ExecStart=
	I0122 12:37:11.737819    9928 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0122 12:37:11.737819    9928 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0122 12:37:11.737819    9928 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0122 12:37:11.737819    9928 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0122 12:37:11.737819    9928 command_runner.go:130] > LimitNOFILE=infinity
	I0122 12:37:11.737819    9928 command_runner.go:130] > LimitNPROC=infinity
	I0122 12:37:11.737819    9928 command_runner.go:130] > LimitCORE=infinity
	I0122 12:37:11.737819    9928 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0122 12:37:11.737819    9928 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0122 12:37:11.737819    9928 command_runner.go:130] > TasksMax=infinity
	I0122 12:37:11.737819    9928 command_runner.go:130] > TimeoutStartSec=0
	I0122 12:37:11.737819    9928 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0122 12:37:11.737819    9928 command_runner.go:130] > Delegate=yes
	I0122 12:37:11.737819    9928 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0122 12:37:11.737819    9928 command_runner.go:130] > KillMode=process
	I0122 12:37:11.737819    9928 command_runner.go:130] > [Install]
	I0122 12:37:11.737819    9928 command_runner.go:130] > WantedBy=multi-user.target
	I0122 12:37:11.750830    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 12:37:11.785606    9928 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0122 12:37:11.821592    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 12:37:11.854639    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0122 12:37:11.892073    9928 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0122 12:37:11.941908    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0122 12:37:11.960913    9928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 12:37:11.994912    9928 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0122 12:37:12.008898    9928 ssh_runner.go:195] Run: which cri-dockerd
	I0122 12:37:12.014562    9928 command_runner.go:130] > /usr/bin/cri-dockerd
	I0122 12:37:12.028245    9928 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0122 12:37:12.043250    9928 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0122 12:37:12.086847    9928 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0122 12:37:12.265839    9928 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0122 12:37:12.420238    9928 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0122 12:37:12.420305    9928 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0122 12:37:12.465353    9928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 12:37:12.629325    9928 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0122 12:37:14.198219    9928 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5688872s)
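The log above only reports that a 130-byte /etc/docker/daemon.json was copied over before restarting docker; the exact contents are not shown, so the JSON below is an assumption about its shape (the cgroupfs driver selection is the part the log does confirm). Written to a temp directory rather than /etc/docker:

```shell
# Assumed shape of the daemon.json that configures docker to use the
# "cgroupfs" cgroup driver, as reported by docker.go:574 above.
tmp=$(mktemp -d)
cat > "$tmp/daemon.json" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file"
}
EOF
cat "$tmp/daemon.json"
```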
	I0122 12:37:14.209976    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0122 12:37:14.243229    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0122 12:37:14.273038    9928 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0122 12:37:14.445198    9928 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0122 12:37:14.618187    9928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 12:37:14.785421    9928 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0122 12:37:14.822324    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0122 12:37:14.859184    9928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 12:37:15.028776    9928 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0122 12:37:15.146791    9928 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0122 12:37:15.157786    9928 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0122 12:37:15.166777    9928 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0122 12:37:15.166777    9928 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0122 12:37:15.166777    9928 command_runner.go:130] > Device: 16h/22d	Inode: 931         Links: 1
	I0122 12:37:15.166777    9928 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0122 12:37:15.166777    9928 command_runner.go:130] > Access: 2024-01-22 12:37:15.053038490 +0000
	I0122 12:37:15.166777    9928 command_runner.go:130] > Modify: 2024-01-22 12:37:15.053038490 +0000
	I0122 12:37:15.166777    9928 command_runner.go:130] > Change: 2024-01-22 12:37:15.057038490 +0000
	I0122 12:37:15.166777    9928 command_runner.go:130] >  Birth: -
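The "Will wait 60s for socket path" step above is a poll loop: stat the socket once a second until it appears or the timeout expires. A minimal sketch of that loop, with a temp file standing in for /var/run/cri-dockerd.sock:

```shell
# Sketch of the 60s wait-for-socket loop; a background touch simulates
# cri-dockerd creating its socket about a second after startup.
sock=$(mktemp -u)
( sleep 1; touch "$sock" ) &
for _ in $(seq 1 60); do
  if stat "$sock" >/dev/null 2>&1; then break; fi
  sleep 1
done
stat "$sock" >/dev/null && echo "socket path ready"
```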
	I0122 12:37:15.166777    9928 start.go:543] Will wait 60s for crictl version
	I0122 12:37:15.177823    9928 ssh_runner.go:195] Run: which crictl
	I0122 12:37:15.183390    9928 command_runner.go:130] > /usr/bin/crictl
	I0122 12:37:15.196590    9928 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0122 12:37:15.270314    9928 command_runner.go:130] > Version:  0.1.0
	I0122 12:37:15.270314    9928 command_runner.go:130] > RuntimeName:  docker
	I0122 12:37:15.270314    9928 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0122 12:37:15.270314    9928 command_runner.go:130] > RuntimeApiVersion:  v1
	I0122 12:37:15.270314    9928 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0122 12:37:15.280247    9928 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0122 12:37:15.323834    9928 command_runner.go:130] > 24.0.7
	I0122 12:37:15.334156    9928 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0122 12:37:15.370011    9928 command_runner.go:130] > 24.0.7
	I0122 12:37:15.371330    9928 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0122 12:37:15.372081    9928 out.go:177]   - env NO_PROXY=172.23.191.211
	I0122 12:37:15.372984    9928 out.go:177]   - env NO_PROXY=172.23.191.211,172.23.187.210
	I0122 12:37:15.373622    9928 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0122 12:37:15.377770    9928 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0122 12:37:15.377770    9928 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0122 12:37:15.377770    9928 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0122 12:37:15.377770    9928 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:49:f0:b7 Flags:up|broadcast|multicast|running}
	I0122 12:37:15.380710    9928 ip.go:210] interface addr: fe80::2ecf:c548:6b1f:89a8/64
	I0122 12:37:15.380710    9928 ip.go:210] interface addr: 172.23.176.1/20
	I0122 12:37:15.392669    9928 ssh_runner.go:195] Run: grep 172.23.176.1	host.minikube.internal$ /etc/hosts
	I0122 12:37:15.397938    9928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.176.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
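The /etc/hosts command above is an idempotent update: grep -v strips any stale host.minikube.internal line, the fresh mapping is appended, and the result is copied back over the original. The same idiom on a temp file (the 172.23.0.9 stale entry is invented for the demo):

```shell
# Idempotent hosts-file update, as run against /etc/hosts above.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.23.0.9\thost.minikube.internal\n' > "$hosts"
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '172.23.176.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

Running it again would leave the file unchanged, which is what makes the pattern safe to repeat on every start.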
	I0122 12:37:15.421671    9928 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200 for IP: 172.23.181.87
	I0122 12:37:15.421671    9928 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 12:37:15.421671    9928 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0122 12:37:15.422637    9928 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0122 12:37:15.422637    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0122 12:37:15.422637    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0122 12:37:15.422637    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0122 12:37:15.422637    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0122 12:37:15.423620    9928 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\6396.pem (1338 bytes)
	W0122 12:37:15.423620    9928 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\6396_empty.pem, impossibly tiny 0 bytes
	I0122 12:37:15.423620    9928 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0122 12:37:15.423620    9928 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0122 12:37:15.424618    9928 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0122 12:37:15.424618    9928 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0122 12:37:15.424618    9928 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem (1708 bytes)
	I0122 12:37:15.424618    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:37:15.425635    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\6396.pem -> /usr/share/ca-certificates/6396.pem
	I0122 12:37:15.425635    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> /usr/share/ca-certificates/63962.pem
	I0122 12:37:15.426628    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0122 12:37:15.468687    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0122 12:37:15.511257    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0122 12:37:15.552811    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0122 12:37:15.595807    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0122 12:37:15.637343    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\6396.pem --> /usr/share/ca-certificates/6396.pem (1338 bytes)
	I0122 12:37:15.680156    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem --> /usr/share/ca-certificates/63962.pem (1708 bytes)
	I0122 12:37:15.734911    9928 ssh_runner.go:195] Run: openssl version
	I0122 12:37:15.742183    9928 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0122 12:37:15.760611    9928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0122 12:37:15.791578    9928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:37:15.797576    9928 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 22 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:37:15.797576    9928 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 22 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:37:15.808576    9928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:37:15.815777    9928 command_runner.go:130] > b5213941
	I0122 12:37:15.829589    9928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0122 12:37:15.857519    9928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6396.pem && ln -fs /usr/share/ca-certificates/6396.pem /etc/ssl/certs/6396.pem"
	I0122 12:37:15.888493    9928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6396.pem
	I0122 12:37:15.893509    9928 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 22 10:53 /usr/share/ca-certificates/6396.pem
	I0122 12:37:15.894472    9928 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 22 10:53 /usr/share/ca-certificates/6396.pem
	I0122 12:37:15.909595    9928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6396.pem
	I0122 12:37:15.915584    9928 command_runner.go:130] > 51391683
	I0122 12:37:15.929840    9928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6396.pem /etc/ssl/certs/51391683.0"
	I0122 12:37:15.956756    9928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/63962.pem && ln -fs /usr/share/ca-certificates/63962.pem /etc/ssl/certs/63962.pem"
	I0122 12:37:15.988094    9928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/63962.pem
	I0122 12:37:15.994910    9928 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 22 10:53 /usr/share/ca-certificates/63962.pem
	I0122 12:37:15.994910    9928 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 22 10:53 /usr/share/ca-certificates/63962.pem
	I0122 12:37:16.010663    9928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/63962.pem
	I0122 12:37:16.017968    9928 command_runner.go:130] > 3ec20f2e
	I0122 12:37:16.030527    9928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/63962.pem /etc/ssl/certs/3ec20f2e.0"
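The openssl steps above implement the OpenSSL trust-store convention: each CA certificate in /etc/ssl/certs gets a symlink named after its subject hash (e.g. b5213941.0 for minikubeCA.pem), which is how OpenSSL locates it at verification time. A sketch with a throwaway self-signed certificate standing in for the minikube CA and a temp dir for /etc/ssl/certs:

```shell
# Subject-hash symlink convention: <hash>.0 -> certificate.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$dir/ca.key" -out "$dir/demoCA.pem" -subj "/CN=demoCA" 2>/dev/null
h=$(openssl x509 -hash -noout -in "$dir/demoCA.pem")
ln -fs "$dir/demoCA.pem" "$dir/$h.0"
ls -l "$dir/$h.0"
```

The test -L || ln -fs guard in the log makes this step idempotent too, mirroring the hosts-file update.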
	I0122 12:37:16.067913    9928 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0122 12:37:16.073536    9928 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0122 12:37:16.074419    9928 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0122 12:37:16.083967    9928 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0122 12:37:16.119629    9928 command_runner.go:130] > cgroupfs
	I0122 12:37:16.120270    9928 cni.go:84] Creating CNI manager for ""
	I0122 12:37:16.120270    9928 cni.go:136] 3 nodes found, recommending kindnet
	I0122 12:37:16.120270    9928 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0122 12:37:16.120270    9928 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.23.181.87 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-484200 NodeName:multinode-484200-m03 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.23.191.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.23.181.87 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0122 12:37:16.120270    9928 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.23.181.87
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-484200-m03"
	  kubeletExtraArgs:
	    node-ip: 172.23.181.87
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.23.191.211"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0122 12:37:16.120803    9928 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-484200-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.181.87
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-484200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0122 12:37:16.133785    9928 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0122 12:37:16.152856    9928 command_runner.go:130] > kubeadm
	I0122 12:37:16.152920    9928 command_runner.go:130] > kubectl
	I0122 12:37:16.152978    9928 command_runner.go:130] > kubelet
	I0122 12:37:16.153034    9928 binaries.go:44] Found k8s binaries, skipping transfer
	I0122 12:37:16.167389    9928 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0122 12:37:16.185773    9928 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0122 12:37:16.212664    9928 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0122 12:37:16.256467    9928 ssh_runner.go:195] Run: grep 172.23.191.211	control-plane.minikube.internal$ /etc/hosts
	I0122 12:37:16.263066    9928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.191.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 12:37:16.280518    9928 host.go:66] Checking if "multinode-484200" exists ...
	I0122 12:37:16.281389    9928 config.go:182] Loaded profile config "multinode-484200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 12:37:16.281677    9928 start.go:304] JoinCluster: &{Name:multinode-484200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-484200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.23.191.211 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.187.210 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.23.181.87 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0122 12:37:16.281677    9928 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0122 12:37:16.281677    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:37:18.412690    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:37:18.412837    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:37:18.412837    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:37:20.957973    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:37:20.957973    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:37:20.958530    9928 sshutil.go:53] new ssh client: &{IP:172.23.191.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200\id_rsa Username:docker}
	I0122 12:37:21.159076    9928 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 714t6y.s5r30pynl4isj9n7 --discovery-token-ca-cert-hash sha256:d9e856518777d1a58dcd52dda4caee52e72283bad61f5aa3ed0978c1365deb23 
	I0122 12:37:21.159291    9928 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.8775932s)
	I0122 12:37:21.159410    9928 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:172.23.181.87 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0122 12:37:21.159510    9928 host.go:66] Checking if "multinode-484200" exists ...
	I0122 12:37:21.173139    9928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-484200-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0122 12:37:21.173139    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:37:23.308789    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:37:23.308876    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:37:23.308930    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:37:25.887213    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:37:25.887213    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:37:25.887425    9928 sshutil.go:53] new ssh client: &{IP:172.23.191.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200\id_rsa Username:docker}
	I0122 12:37:26.073065    9928 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0122 12:37:26.144063    9928 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-zs4rk, kube-system/kube-proxy-mxd9x
	I0122 12:37:26.146074    9928 command_runner.go:130] > node/multinode-484200-m03 cordoned
	I0122 12:37:26.146074    9928 command_runner.go:130] > node/multinode-484200-m03 drained
	I0122 12:37:26.146793    9928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-484200-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (4.9736323s)
	I0122 12:37:26.146793    9928 node.go:108] successfully drained node "m03"
	I0122 12:37:26.147788    9928 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 12:37:26.148598    9928 kapi.go:59] client config for multinode-484200: &rest.Config{Host:"https://172.23.191.211:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x184a600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0122 12:37:26.149614    9928 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0122 12:37:26.149736    9928 round_trippers.go:463] DELETE https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:26.149736    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:26.149795    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:26.149795    9928 round_trippers.go:473]     Content-Type: application/json
	I0122 12:37:26.149795    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:26.166608    9928 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0122 12:37:26.166608    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:26.167047    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:26.167047    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:26.167047    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:26.167118    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:26.167159    9928 round_trippers.go:580]     Content-Length: 171
	I0122 12:37:26.167188    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:26 GMT
	I0122 12:37:26.167188    9928 round_trippers.go:580]     Audit-Id: 69d0ac7c-b493-49d9-a02b-2b046b2aed73
	I0122 12:37:26.167188    9928 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-484200-m03","kind":"nodes","uid":"1d1fc222-9802-4017-8aee-b3310ac58592"}}
	I0122 12:37:26.167188    9928 node.go:124] successfully deleted node "m03"
	I0122 12:37:26.167188    9928 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:172.23.181.87 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0122 12:37:26.167188    9928 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:172.23.181.87 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0122 12:37:26.167188    9928 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 714t6y.s5r30pynl4isj9n7 --discovery-token-ca-cert-hash sha256:d9e856518777d1a58dcd52dda4caee52e72283bad61f5aa3ed0978c1365deb23 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-484200-m03"
	I0122 12:37:26.525562    9928 command_runner.go:130] ! W0122 12:37:26.526895    1350 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0122 12:37:27.218084    9928 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0122 12:37:29.034781    9928 command_runner.go:130] > [preflight] Running pre-flight checks
	I0122 12:37:29.035376    9928 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0122 12:37:29.035376    9928 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0122 12:37:29.035376    9928 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0122 12:37:29.035376    9928 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0122 12:37:29.035376    9928 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0122 12:37:29.035376    9928 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0122 12:37:29.035376    9928 command_runner.go:130] > This node has joined the cluster:
	I0122 12:37:29.035376    9928 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0122 12:37:29.035376    9928 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0122 12:37:29.035376    9928 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0122 12:37:29.035376    9928 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 714t6y.s5r30pynl4isj9n7 --discovery-token-ca-cert-hash sha256:d9e856518777d1a58dcd52dda4caee52e72283bad61f5aa3ed0978c1365deb23 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-484200-m03": (2.8681756s)
	I0122 12:37:29.035376    9928 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0122 12:37:29.458711    9928 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0122 12:37:29.472501    9928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=02088786cd935ebc1444d13e603368558b96d0e3 minikube.k8s.io/name=multinode-484200 minikube.k8s.io/updated_at=2024_01_22T12_37_29_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 12:37:29.609648    9928 command_runner.go:130] > node/multinode-484200-m02 labeled
	I0122 12:37:29.609837    9928 command_runner.go:130] > node/multinode-484200-m03 labeled
	I0122 12:37:29.609947    9928 start.go:306] JoinCluster complete in 13.3282118s
	I0122 12:37:29.609947    9928 cni.go:84] Creating CNI manager for ""
	I0122 12:37:29.609947    9928 cni.go:136] 3 nodes found, recommending kindnet
	I0122 12:37:29.623111    9928 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0122 12:37:29.632099    9928 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0122 12:37:29.632099    9928 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0122 12:37:29.632099    9928 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0122 12:37:29.632099    9928 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0122 12:37:29.632099    9928 command_runner.go:130] > Access: 2024-01-22 12:31:26.825009300 +0000
	I0122 12:37:29.632099    9928 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0122 12:37:29.632099    9928 command_runner.go:130] > Change: 2024-01-22 12:31:17.097000000 +0000
	I0122 12:37:29.632099    9928 command_runner.go:130] >  Birth: -
	I0122 12:37:29.632099    9928 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0122 12:37:29.632099    9928 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0122 12:37:29.680877    9928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0122 12:37:30.067201    9928 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0122 12:37:30.067201    9928 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0122 12:37:30.067201    9928 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0122 12:37:30.067201    9928 command_runner.go:130] > daemonset.apps/kindnet configured
	I0122 12:37:30.068402    9928 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 12:37:30.069148    9928 kapi.go:59] client config for multinode-484200: &rest.Config{Host:"https://172.23.191.211:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x184a600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0122 12:37:30.069919    9928 round_trippers.go:463] GET https://172.23.191.211:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0122 12:37:30.069919    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:30.069919    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:30.069919    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:30.076883    9928 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0122 12:37:30.076883    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:30.076883    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:30.076883    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:30.076883    9928 round_trippers.go:580]     Content-Length: 292
	I0122 12:37:30.076883    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:30 GMT
	I0122 12:37:30.076883    9928 round_trippers.go:580]     Audit-Id: 288f2d34-7add-4a78-8187-5db203f0e6c3
	I0122 12:37:30.076883    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:30.076883    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:30.078250    9928 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"98df7e30-2026-4507-ad8d-589d82e9b1ba","resourceVersion":"1788","creationTimestamp":"2024-01-22T12:10:46Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0122 12:37:30.078250    9928 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-484200" context rescaled to 1 replicas
	I0122 12:37:30.078250    9928 start.go:223] Will wait 6m0s for node &{Name:m03 IP:172.23.181.87 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0122 12:37:30.078921    9928 out.go:177] * Verifying Kubernetes components...
	I0122 12:37:30.094720    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 12:37:30.115078    9928 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 12:37:30.115979    9928 kapi.go:59] client config for multinode-484200: &rest.Config{Host:"https://172.23.191.211:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x184a600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0122 12:37:30.117386    9928 node_ready.go:35] waiting up to 6m0s for node "multinode-484200-m03" to be "Ready" ...
	I0122 12:37:30.117386    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:30.117386    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:30.117386    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:30.117386    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:30.124734    9928 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0122 12:37:30.124766    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:30.124766    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:30.124766    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:30.124766    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:30 GMT
	I0122 12:37:30.124766    9928 round_trippers.go:580]     Audit-Id: 5fcc38ce-e373-4085-a975-5b77a7c30eca
	I0122 12:37:30.124766    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:30.124766    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:30.124766    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2120","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3448 chars]
	I0122 12:37:30.624292    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:30.624292    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:30.624292    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:30.624292    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:30.628884    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:37:30.628884    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:30.628884    9928 round_trippers.go:580]     Audit-Id: f151416e-8a39-4a0f-9f57-de8b113beabd
	I0122 12:37:30.628884    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:30.629083    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:30.629083    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:30.629134    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:30.629163    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:30 GMT
	I0122 12:37:30.629512    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2120","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3448 chars]
	I0122 12:37:31.127136    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:31.127136    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:31.127136    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:31.127136    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:31.131504    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:37:31.131577    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:31.131577    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:31.131577    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:31.131577    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:31 GMT
	I0122 12:37:31.131577    9928 round_trippers.go:580]     Audit-Id: dc134a76-41b6-4778-a252-88d95fb4a03d
	I0122 12:37:31.131577    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:31.131577    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:31.131577    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2120","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3448 chars]
	I0122 12:37:31.630947    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:31.630947    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:31.630947    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:31.630947    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:31.635799    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:37:31.635874    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:31.635874    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:31.635874    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:31.635973    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:31.635973    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:31 GMT
	I0122 12:37:31.635973    9928 round_trippers.go:580]     Audit-Id: 91618188-936b-4721-a34f-e80178259475
	I0122 12:37:31.635973    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:31.636318    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2120","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3448 chars]
	I0122 12:37:32.131828    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:32.131828    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:32.131828    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:32.131828    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:32.135453    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:37:32.135923    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:32.135923    9928 round_trippers.go:580]     Audit-Id: ec3966c9-ba55-4cd1-84c8-7b142e6c7ec5
	I0122 12:37:32.135923    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:32.135923    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:32.135923    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:32.135999    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:32.135999    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:32 GMT
	I0122 12:37:32.136028    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2124","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3557 chars]
	I0122 12:37:32.136603    9928 node_ready.go:58] node "multinode-484200-m03" has status "Ready":"False"
	I0122 12:37:32.619040    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:32.619234    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:32.619234    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:32.619341    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:32.622892    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:37:32.622892    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:32.623923    9928 round_trippers.go:580]     Audit-Id: b8f9bdd9-a462-45fd-82a6-273436a9992d
	I0122 12:37:32.623923    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:32.623923    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:32.624008    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:32.624008    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:32.624008    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:32 GMT
	I0122 12:37:32.624283    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2124","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3557 chars]
	I0122 12:37:33.123418    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:33.123503    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:33.123503    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:33.123503    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:33.127126    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:37:33.127126    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:33.127126    9928 round_trippers.go:580]     Audit-Id: 8f330dfd-ad2d-42d5-9420-694ce4a048f4
	I0122 12:37:33.127126    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:33.127126    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:33.127126    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:33.127498    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:33.127498    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:33 GMT
	I0122 12:37:33.127646    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2124","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3557 chars]
	I0122 12:37:33.626990    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:33.627075    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:33.627075    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:33.627197    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:33.630595    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:37:33.630595    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:33.630595    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:33.630595    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:33.630595    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:33.630595    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:33 GMT
	I0122 12:37:33.630595    9928 round_trippers.go:580]     Audit-Id: ecc26cb4-2cba-49db-997e-6c8bd707d1a8
	I0122 12:37:33.630595    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:33.630595    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2124","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3557 chars]
	I0122 12:37:34.120803    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:34.120910    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:34.120910    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:34.120983    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:34.124786    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:37:34.125846    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:34.125846    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:34.125846    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:34.125846    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:34.125846    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:34.125846    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:34 GMT
	I0122 12:37:34.125846    9928 round_trippers.go:580]     Audit-Id: 0cf7d42c-9e8a-4d6e-a49d-f4455114bd4e
	I0122 12:37:34.126070    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2124","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3557 chars]
	I0122 12:37:34.624178    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:34.624299    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:34.624299    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:34.624366    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:34.628778    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:37:34.628836    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:34.628836    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:34.628836    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:34.628836    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:34.628836    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:34 GMT
	I0122 12:37:34.628919    9928 round_trippers.go:580]     Audit-Id: ade9644b-1bcf-41b7-8f72-29026b0bdf74
	I0122 12:37:34.628919    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:34.629141    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2124","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3557 chars]
	I0122 12:37:34.629141    9928 node_ready.go:58] node "multinode-484200-m03" has status "Ready":"False"
	I0122 12:37:35.125135    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:35.125135    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:35.125232    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:35.125232    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:35.129053    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:37:35.129053    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:35.129053    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:35.129053    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:35.129053    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:35.129053    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:35.129053    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:35 GMT
	I0122 12:37:35.129053    9928 round_trippers.go:580]     Audit-Id: fa14c3b5-b7c9-4acb-bbd5-1485d019d2a1
	I0122 12:37:35.130176    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2124","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3557 chars]
	I0122 12:37:35.628576    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:35.628813    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:35.628813    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:35.628813    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:35.632074    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:37:35.632074    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:35.632074    9928 round_trippers.go:580]     Audit-Id: 2f0f4919-e46f-4889-82ac-f06cafe3d8f5
	I0122 12:37:35.632074    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:35.633166    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:35.633166    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:35.633166    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:35.633166    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:35 GMT
	I0122 12:37:35.633530    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2124","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3557 chars]
	I0122 12:37:36.128256    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:36.128345    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:36.128345    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:36.128345    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:36.134749    9928 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0122 12:37:36.134749    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:36.134749    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:36.134749    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:36.134749    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:36.135320    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:36.135351    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:36 GMT
	I0122 12:37:36.135425    9928 round_trippers.go:580]     Audit-Id: 3022fa39-c6d8-40df-bdde-be00d8f96151
	I0122 12:37:36.136007    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2124","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3557 chars]
	I0122 12:37:36.618136    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:36.618294    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:36.618294    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:36.618294    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:36.623533    9928 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:37:36.623829    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:36.623829    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:36.623829    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:36.623829    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:36 GMT
	I0122 12:37:36.623829    9928 round_trippers.go:580]     Audit-Id: 08009933-fd22-4427-b84d-c6feec4ee5dc
	I0122 12:37:36.623829    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:36.623829    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:36.624113    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2124","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3557 chars]
	I0122 12:37:37.118318    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:37.118387    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:37.118387    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:37.118387    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:37.123037    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:37:37.123037    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:37.123037    9928 round_trippers.go:580]     Audit-Id: 2459e37a-dc99-42f5-a625-3836fd5c9b09
	I0122 12:37:37.123037    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:37.123724    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:37.123724    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:37.123724    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:37.123724    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:37 GMT
	I0122 12:37:37.123863    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2124","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3557 chars]
	I0122 12:37:37.124277    9928 node_ready.go:58] node "multinode-484200-m03" has status "Ready":"False"
	I0122 12:37:37.619323    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:37.619408    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:37.619408    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:37.619488    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:37.623273    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:37:37.624087    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:37.624087    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:37.624087    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:37.624087    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:37 GMT
	I0122 12:37:37.624087    9928 round_trippers.go:580]     Audit-Id: 9de9f837-6eaa-4986-b8b9-b72ed092fe22
	I0122 12:37:37.624087    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:37.624087    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:37.624425    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2124","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3557 chars]
	I0122 12:37:38.126302    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:38.126302    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:38.126302    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:38.126302    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:38.131239    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:37:38.131307    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:38.131307    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:38.131307    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:38.131307    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:38 GMT
	I0122 12:37:38.131307    9928 round_trippers.go:580]     Audit-Id: 864dc71b-7c7b-45d8-82cc-3a9b1ca9ee4e
	I0122 12:37:38.131307    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:38.131307    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:38.132128    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2135","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0122 12:37:38.629131    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:38.629238    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:38.629238    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:38.629238    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:38.633202    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:37:38.633202    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:38.633202    9928 round_trippers.go:580]     Audit-Id: 0839a7a1-4c67-4179-8a8f-02698e59aef3
	I0122 12:37:38.633362    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:38.633362    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:38.633420    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:38.633420    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:38.633478    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:38 GMT
	I0122 12:37:38.633607    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2135","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0122 12:37:39.132242    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:39.132242    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:39.132337    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:39.132337    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:39.138661    9928 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0122 12:37:39.138661    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:39.139196    9928 round_trippers.go:580]     Audit-Id: 2076114a-0d2e-492f-9547-ef7549ba0454
	I0122 12:37:39.139196    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:39.139196    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:39.139196    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:39.139196    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:39.139265    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:39 GMT
	I0122 12:37:39.139593    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2135","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0122 12:37:39.140047    9928 node_ready.go:58] node "multinode-484200-m03" has status "Ready":"False"
	I0122 12:37:39.632359    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:39.632502    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:39.632502    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:39.632502    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:39.636723    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:37:39.636723    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:39.636894    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:39.636894    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:39.636894    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:39 GMT
	I0122 12:37:39.636894    9928 round_trippers.go:580]     Audit-Id: 2b90138f-3de5-4412-a5fc-cf3496709651
	I0122 12:37:39.636894    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:39.637125    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:39.637224    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2135","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0122 12:37:40.121889    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:40.121889    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:40.121889    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:40.121889    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:40.125534    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:37:40.126419    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:40.126419    9928 round_trippers.go:580]     Audit-Id: d8434f29-a3d9-4a59-ba5b-8ad14a69b0b2
	I0122 12:37:40.126419    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:40.126486    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:40.126515    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:40.126515    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:40.126515    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:40 GMT
	I0122 12:37:40.126680    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2135","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0122 12:37:40.625175    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:40.625270    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:40.625270    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:40.625270    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:40.629592    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:37:40.629679    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:40.629679    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:40.629679    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:40.629679    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:40.629679    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:40 GMT
	I0122 12:37:40.629679    9928 round_trippers.go:580]     Audit-Id: a6a354b0-7516-46ae-909c-2e8cfad1c3e0
	I0122 12:37:40.629679    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:40.629938    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2135","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0122 12:37:41.127025    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:41.127117    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:41.127117    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:41.127117    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:41.133535    9928 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0122 12:37:41.133571    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:41.133571    9928 round_trippers.go:580]     Audit-Id: c607a285-415e-46b3-988e-b4bdaee7c771
	I0122 12:37:41.133571    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:41.133571    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:41.133671    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:41.133671    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:41.133671    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:41 GMT
	I0122 12:37:41.134278    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2135","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0122 12:37:41.625300    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:41.625300    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:41.625300    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:41.625300    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:41.628901    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:37:41.629900    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:41.630037    9928 round_trippers.go:580]     Audit-Id: 36eddb3e-ff21-40d0-9072-0e366e536f6c
	I0122 12:37:41.630058    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:41.630058    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:41.630058    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:41.630058    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:41.630058    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:41 GMT
	I0122 12:37:41.630909    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2135","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0122 12:37:41.631510    9928 node_ready.go:58] node "multinode-484200-m03" has status "Ready":"False"
	I0122 12:37:42.125702    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:42.125702    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:42.125879    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:42.125879    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:42.130107    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:37:42.130196    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:42.130196    9928 round_trippers.go:580]     Audit-Id: 99e86887-a766-4c42-88f2-0305362f9a8b
	I0122 12:37:42.130196    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:42.130196    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:42.130196    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:42.130196    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:42.130307    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:42 GMT
	I0122 12:37:42.130307    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2135","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0122 12:37:42.628133    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:42.628133    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:42.628133    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:42.628133    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:42.628133    9928 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0122 12:37:42.628133    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:42.628133    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:42.628133    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:42 GMT
	I0122 12:37:42.628133    9928 round_trippers.go:580]     Audit-Id: c450f50d-ca7d-44ed-8926-fff98023e7c4
	I0122 12:37:42.628133    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:42.628133    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:42.628133    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:42.628133    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2135","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0122 12:37:43.128191    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:43.128191    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:43.128341    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:43.128341    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:43.132581    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:37:43.132606    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:43.132606    9928 round_trippers.go:580]     Audit-Id: a6063ace-e36e-42b9-86c5-7bbd339a98a2
	I0122 12:37:43.132606    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:43.132606    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:43.132606    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:43.132606    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:43.132606    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:43 GMT
	I0122 12:37:43.133029    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2135","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0122 12:37:43.631013    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:43.631106    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:43.631106    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:43.631187    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:43.634427    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:37:43.634427    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:43.634427    9928 round_trippers.go:580]     Audit-Id: 46f00bc3-8288-489f-bebd-fda086581185
	I0122 12:37:43.634427    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:43.634427    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:43.634427    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:43.634427    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:43.634979    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:43 GMT
	I0122 12:37:43.635222    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2135","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0122 12:37:43.635695    9928 node_ready.go:58] node "multinode-484200-m03" has status "Ready":"False"
	I0122 12:37:44.131951    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:44.132022    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:44.132022    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:44.132098    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:44.139901    9928 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0122 12:37:44.139972    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:44.139972    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:44.139972    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:44 GMT
	I0122 12:37:44.140037    9928 round_trippers.go:580]     Audit-Id: 45acc71c-ddfa-4fb3-8200-9b3bbff9d04c
	I0122 12:37:44.140037    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:44.140084    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:44.140112    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:44.140473    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2135","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0122 12:37:44.632039    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:44.632039    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:44.632039    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:44.632293    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:44.637149    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:37:44.637511    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:44.637511    9928 round_trippers.go:580]     Audit-Id: b0e13496-e994-4fab-a98b-f228da201735
	I0122 12:37:44.637661    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:44.637686    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:44.637686    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:44.637686    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:44.637686    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:44 GMT
	I0122 12:37:44.637686    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2135","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0122 12:37:45.132817    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:45.132817    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:45.132817    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:45.132817    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:45.136234    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:37:45.136234    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:45.136797    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:45.136797    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:45.136797    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:45.136797    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:45 GMT
	I0122 12:37:45.136797    9928 round_trippers.go:580]     Audit-Id: 1da10034-ed00-41cd-a9c4-7b3a6c3e0e17
	I0122 12:37:45.136797    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:45.137016    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2135","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0122 12:37:45.631001    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:45.631001    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:45.631001    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:45.631001    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:45.636052    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:37:45.636123    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:45.636123    9928 round_trippers.go:580]     Audit-Id: 380d907e-26e3-4e62-9239-6818d8ee8cff
	I0122 12:37:45.636123    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:45.636123    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:45.636123    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:45.636205    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:45.636205    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:45 GMT
	I0122 12:37:45.636441    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2135","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0122 12:37:45.636984    9928 node_ready.go:58] node "multinode-484200-m03" has status "Ready":"False"
	I0122 12:37:46.130731    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:46.130812    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:46.130812    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:46.130812    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:46.135408    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:37:46.135408    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:46.135408    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:46.135624    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:46.135624    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:46 GMT
	I0122 12:37:46.135624    9928 round_trippers.go:580]     Audit-Id: 4707dd7c-d4dc-468c-a580-b5d2727a16de
	I0122 12:37:46.135755    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:46.135755    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:46.135901    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2135","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0122 12:37:46.622896    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:46.623094    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:46.623094    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:46.623094    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:46.627521    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:37:46.627521    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:46.627521    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:46.628019    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:46.628019    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:46.628019    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:46.628019    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:46 GMT
	I0122 12:37:46.628019    9928 round_trippers.go:580]     Audit-Id: 27cb4782-6119-453a-959e-c0f086f20fc0
	I0122 12:37:46.628189    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2135","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0122 12:37:47.122154    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:47.122154    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:47.122154    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:47.122154    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:47.126262    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:37:47.126322    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:47.126322    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:47.126322    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:47.126322    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:47.126322    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:47 GMT
	I0122 12:37:47.126322    9928 round_trippers.go:580]     Audit-Id: 437b5801-85b6-4d27-a936-b5335cdc0f84
	I0122 12:37:47.126322    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:47.126596    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2135","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0122 12:37:47.622922    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:47.623049    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:47.623049    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:47.623049    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:47.627438    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:37:47.627438    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:47.627438    9928 round_trippers.go:580]     Audit-Id: 1c903d00-fb53-4e96-b7e0-9067f3614514
	I0122 12:37:47.627438    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:47.627438    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:47.627438    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:47.627438    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:47.627438    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:47 GMT
	I0122 12:37:47.627438    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2135","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0122 12:37:48.128876    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:48.128876    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:48.128876    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:48.128876    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:48.134717    9928 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:37:48.134795    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:48.134795    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:48 GMT
	I0122 12:37:48.134889    9928 round_trippers.go:580]     Audit-Id: 3d2ebe51-5a3a-4118-aa56-98f6dbcaccf6
	I0122 12:37:48.134973    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:48.134973    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:48.135043    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:48.135061    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:48.135391    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2135","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0122 12:37:48.136138    9928 node_ready.go:58] node "multinode-484200-m03" has status "Ready":"False"
	I0122 12:37:48.618542    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:48.618629    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:48.618629    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:48.618629    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:48.622014    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:37:48.622424    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:48.622424    9928 round_trippers.go:580]     Audit-Id: b9d09049-0d5b-4564-81a5-1f0cb47d0591
	I0122 12:37:48.622424    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:48.622424    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:48.622424    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:48.622424    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:48.622517    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:48 GMT
	I0122 12:37:48.622734    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2135","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]

** /stderr **
multinode_test.go:325: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-484200" : exit status 1
multinode_test.go:328: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-484200
multinode_test.go:328: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node list -p multinode-484200: context deadline exceeded (188.8µs)
multinode_test.go:330: failed to run node list. args "out/minikube-windows-amd64.exe node list -p multinode-484200" : context deadline exceeded
multinode_test.go:335: reported node list is not the same after restart. Before restart: multinode-484200	172.23.177.163
multinode-484200-m02	172.23.185.74
multinode-484200-m03	172.23.187.157

After restart: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-484200 -n multinode-484200
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-484200 -n multinode-484200: (12.3840016s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-484200 logs -n 25: (9.0046135s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| cp      | multinode-484200 cp testdata\cp-test.txt                                                                                 | multinode-484200 | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:21 UTC | 22 Jan 24 12:22 UTC |
	|         | multinode-484200-m02:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-484200 ssh -n                                                                                                  | multinode-484200 | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:22 UTC | 22 Jan 24 12:22 UTC |
	|         | multinode-484200-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-484200 cp multinode-484200-m02:/home/docker/cp-test.txt                                                        | multinode-484200 | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:22 UTC | 22 Jan 24 12:22 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile3063177298\001\cp-test_multinode-484200-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-484200 ssh -n                                                                                                  | multinode-484200 | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:22 UTC | 22 Jan 24 12:22 UTC |
	|         | multinode-484200-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-484200 cp multinode-484200-m02:/home/docker/cp-test.txt                                                        | multinode-484200 | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:22 UTC | 22 Jan 24 12:22 UTC |
	|         | multinode-484200:/home/docker/cp-test_multinode-484200-m02_multinode-484200.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-484200 ssh -n                                                                                                  | multinode-484200 | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:22 UTC | 22 Jan 24 12:22 UTC |
	|         | multinode-484200-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-484200 ssh -n multinode-484200 sudo cat                                                                        | multinode-484200 | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:22 UTC | 22 Jan 24 12:23 UTC |
	|         | /home/docker/cp-test_multinode-484200-m02_multinode-484200.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-484200 cp multinode-484200-m02:/home/docker/cp-test.txt                                                        | multinode-484200 | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:23 UTC | 22 Jan 24 12:23 UTC |
	|         | multinode-484200-m03:/home/docker/cp-test_multinode-484200-m02_multinode-484200-m03.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-484200 ssh -n                                                                                                  | multinode-484200 | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:23 UTC | 22 Jan 24 12:23 UTC |
	|         | multinode-484200-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-484200 ssh -n multinode-484200-m03 sudo cat                                                                    | multinode-484200 | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:23 UTC | 22 Jan 24 12:23 UTC |
	|         | /home/docker/cp-test_multinode-484200-m02_multinode-484200-m03.txt                                                       |                  |                   |         |                     |                     |
	| cp      | multinode-484200 cp testdata\cp-test.txt                                                                                 | multinode-484200 | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:23 UTC | 22 Jan 24 12:23 UTC |
	|         | multinode-484200-m03:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-484200 ssh -n                                                                                                  | multinode-484200 | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:23 UTC | 22 Jan 24 12:23 UTC |
	|         | multinode-484200-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-484200 cp multinode-484200-m03:/home/docker/cp-test.txt                                                        | multinode-484200 | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:23 UTC | 22 Jan 24 12:24 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile3063177298\001\cp-test_multinode-484200-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-484200 ssh -n                                                                                                  | multinode-484200 | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:24 UTC | 22 Jan 24 12:24 UTC |
	|         | multinode-484200-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-484200 cp multinode-484200-m03:/home/docker/cp-test.txt                                                        | multinode-484200 | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:24 UTC | 22 Jan 24 12:24 UTC |
	|         | multinode-484200:/home/docker/cp-test_multinode-484200-m03_multinode-484200.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-484200 ssh -n                                                                                                  | multinode-484200 | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:24 UTC | 22 Jan 24 12:24 UTC |
	|         | multinode-484200-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-484200 ssh -n multinode-484200 sudo cat                                                                        | multinode-484200 | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:24 UTC | 22 Jan 24 12:24 UTC |
	|         | /home/docker/cp-test_multinode-484200-m03_multinode-484200.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-484200 cp multinode-484200-m03:/home/docker/cp-test.txt                                                        | multinode-484200 | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:24 UTC | 22 Jan 24 12:25 UTC |
	|         | multinode-484200-m02:/home/docker/cp-test_multinode-484200-m03_multinode-484200-m02.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-484200 ssh -n                                                                                                  | multinode-484200 | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:25 UTC | 22 Jan 24 12:25 UTC |
	|         | multinode-484200-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-484200 ssh -n multinode-484200-m02 sudo cat                                                                    | multinode-484200 | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:25 UTC | 22 Jan 24 12:25 UTC |
	|         | /home/docker/cp-test_multinode-484200-m03_multinode-484200-m02.txt                                                       |                  |                   |         |                     |                     |
	| node    | multinode-484200 node stop m03                                                                                           | multinode-484200 | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:25 UTC | 22 Jan 24 12:25 UTC |
	| node    | multinode-484200 node start                                                                                              | multinode-484200 | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:26 UTC | 22 Jan 24 12:28 UTC |
	|         | m03 --alsologtostderr                                                                                                    |                  |                   |         |                     |                     |
	| node    | list -p multinode-484200                                                                                                 | multinode-484200 | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:29 UTC |                     |
	| stop    | -p multinode-484200                                                                                                      | multinode-484200 | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:29 UTC | 22 Jan 24 12:30 UTC |
	| start   | -p multinode-484200                                                                                                      | multinode-484200 | minikube7\jenkins | v1.32.0 | 22 Jan 24 12:30 UTC |                     |
	|         | --wait=true -v=8                                                                                                         |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                        |                  |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/22 12:30:50
	Running on machine: minikube7
	Binary: Built with gc go1.21.6 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0122 12:30:50.363371    9928 out.go:296] Setting OutFile to fd 1004 ...
	I0122 12:30:50.364157    9928 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0122 12:30:50.364157    9928 out.go:309] Setting ErrFile to fd 692...
	I0122 12:30:50.364157    9928 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0122 12:30:50.385941    9928 out.go:303] Setting JSON to false
	I0122 12:30:50.388673    9928 start.go:128] hostinfo: {"hostname":"minikube7","uptime":46019,"bootTime":1705880631,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3930 Build 19045.3930","kernelVersion":"10.0.19045.3930 Build 19045.3930","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0122 12:30:50.388673    9928 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0122 12:30:50.390824    9928 out.go:177] * [multinode-484200] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	I0122 12:30:50.391190    9928 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 12:30:50.391190    9928 notify.go:220] Checking for updates...
	I0122 12:30:50.392543    9928 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0122 12:30:50.393181    9928 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0122 12:30:50.394178    9928 out.go:177]   - MINIKUBE_LOCATION=18014
	I0122 12:30:50.394537    9928 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 12:30:50.396604    9928 config.go:182] Loaded profile config "multinode-484200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 12:30:50.396934    9928 driver.go:392] Setting default libvirt URI to qemu:///system
	I0122 12:30:55.836587    9928 out.go:177] * Using the hyperv driver based on existing profile
	I0122 12:30:55.837465    9928 start.go:298] selected driver: hyperv
	I0122 12:30:55.837465    9928 start.go:902] validating driver "hyperv" against &{Name:multinode-484200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:multinode-484200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.23.177.163 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.185.74 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.23.187.157 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false in
accel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTyp
e:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0122 12:30:55.837802    9928 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0122 12:30:55.894326    9928 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0122 12:30:55.894326    9928 cni.go:84] Creating CNI manager for ""
	I0122 12:30:55.894326    9928 cni.go:136] 3 nodes found, recommending kindnet
	I0122 12:30:55.894326    9928 start_flags.go:321] config:
	{Name:multinode-484200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-484200 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.23.177.163 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.185.74 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.23.187.157 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false i
stio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fal
se CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0122 12:30:55.894994    9928 iso.go:125] acquiring lock: {Name:mkdec6f26395f187031104148d2e41f41e1ae42b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 12:30:55.899241    9928 out.go:177] * Starting control plane node multinode-484200 in cluster multinode-484200
	I0122 12:30:55.899883    9928 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0122 12:30:55.900599    9928 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0122 12:30:55.900599    9928 cache.go:56] Caching tarball of preloaded images
	I0122 12:30:55.900599    9928 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0122 12:30:55.900599    9928 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0122 12:30:55.900599    9928 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\config.json ...
	I0122 12:30:55.902986    9928 start.go:365] acquiring machines lock for multinode-484200: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0122 12:30:55.902986    9928 start.go:369] acquired machines lock for "multinode-484200" in 0s
	I0122 12:30:55.902986    9928 start.go:96] Skipping create...Using existing machine configuration
	I0122 12:30:55.903987    9928 fix.go:54] fixHost starting: 
	I0122 12:30:55.903987    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:30:58.640896    9928 main.go:141] libmachine: [stdout =====>] : Off
	
	I0122 12:30:58.641066    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:30:58.641066    9928 fix.go:102] recreateIfNeeded on multinode-484200: state=Stopped err=<nil>
	W0122 12:30:58.641066    9928 fix.go:128] unexpected machine state, will restart: <nil>
	I0122 12:30:58.642536    9928 out.go:177] * Restarting existing hyperv VM for "multinode-484200" ...
	I0122 12:30:58.643346    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-484200
	I0122 12:31:01.658248    9928 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:31:01.658248    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:01.658248    9928 main.go:141] libmachine: Waiting for host to start...
	I0122 12:31:01.658348    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:31:03.933413    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:31:03.933473    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:03.933627    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:31:06.494397    9928 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:31:06.494641    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:07.509457    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:31:09.854565    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:31:09.854664    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:09.854664    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:31:12.397605    9928 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:31:12.397781    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:13.402359    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:31:15.569055    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:31:15.569088    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:15.569190    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:31:18.081049    9928 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:31:18.081286    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:19.088792    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:31:21.294223    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:31:21.294257    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:21.294320    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:31:23.835479    9928 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:31:23.835757    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:24.837048    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:31:27.081003    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:31:27.081095    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:27.081095    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:31:29.660428    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:31:29.660726    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:29.663822    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:31:31.805251    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:31:31.805251    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:31.805251    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:31:34.305456    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:31:34.305456    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:34.305551    9928 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\config.json ...
	I0122 12:31:34.307671    9928 machine.go:88] provisioning docker machine ...
	I0122 12:31:34.308309    9928 buildroot.go:166] provisioning hostname "multinode-484200"
	I0122 12:31:34.308457    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:31:36.408442    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:31:36.408654    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:36.408654    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:31:38.951759    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:31:38.951759    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:38.957547    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:31:38.958217    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.191.211 22 <nil> <nil>}
	I0122 12:31:38.958217    9928 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-484200 && echo "multinode-484200" | sudo tee /etc/hostname
	I0122 12:31:39.120498    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-484200
	
	I0122 12:31:39.120574    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:31:41.244792    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:31:41.245004    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:41.245004    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:31:43.816084    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:31:43.816084    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:43.822313    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:31:43.823097    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.191.211 22 <nil> <nil>}
	I0122 12:31:43.823140    9928 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-484200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-484200/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-484200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0122 12:31:43.977757    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 12:31:43.977757    9928 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0122 12:31:43.977886    9928 buildroot.go:174] setting up certificates
	I0122 12:31:43.977937    9928 provision.go:83] configureAuth start
	I0122 12:31:43.978023    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:31:46.094490    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:31:46.094490    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:46.094594    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:31:48.593389    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:31:48.593493    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:48.593493    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:31:50.686618    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:31:50.686820    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:50.686900    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:31:53.242868    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:31:53.242982    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:53.243131    9928 provision.go:138] copyHostCerts
	I0122 12:31:53.243162    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0122 12:31:53.243162    9928 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0122 12:31:53.243162    9928 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0122 12:31:53.243883    9928 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0122 12:31:53.245059    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0122 12:31:53.245213    9928 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0122 12:31:53.245213    9928 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0122 12:31:53.245734    9928 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0122 12:31:53.246657    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0122 12:31:53.247232    9928 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0122 12:31:53.247232    9928 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0122 12:31:53.247232    9928 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0122 12:31:53.248717    9928 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-484200 san=[172.23.191.211 172.23.191.211 localhost 127.0.0.1 minikube multinode-484200]
	I0122 12:31:53.745769    9928 provision.go:172] copyRemoteCerts
	I0122 12:31:53.758638    9928 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0122 12:31:53.758638    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:31:55.937226    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:31:55.937226    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:55.937226    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:31:58.487083    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:31:58.487248    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:31:58.487854    9928 sshutil.go:53] new ssh client: &{IP:172.23.191.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200\id_rsa Username:docker}
	I0122 12:31:58.595693    9928 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8369812s)
	I0122 12:31:58.595743    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0122 12:31:58.595831    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0122 12:31:58.635752    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0122 12:31:58.636266    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I0122 12:31:58.671363    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0122 12:31:58.671701    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0122 12:31:58.710168    9928 provision.go:86] duration metric: configureAuth took 14.7321668s
	I0122 12:31:58.710272    9928 buildroot.go:189] setting minikube options for container-runtime
	I0122 12:31:58.711251    9928 config.go:182] Loaded profile config "multinode-484200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 12:31:58.711422    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:32:00.827121    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:32:00.827121    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:32:00.827277    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:32:03.335985    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:32:03.335985    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:32:03.346408    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:32:03.347254    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.191.211 22 <nil> <nil>}
	I0122 12:32:03.347254    9928 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0122 12:32:03.486169    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0122 12:32:03.486255    9928 buildroot.go:70] root file system type: tmpfs
	I0122 12:32:03.486439    9928 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0122 12:32:03.486530    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:32:05.582752    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:32:05.582752    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:32:05.582840    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:32:08.078475    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:32:08.078475    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:32:08.084683    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:32:08.085261    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.191.211 22 <nil> <nil>}
	I0122 12:32:08.085382    9928 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0122 12:32:08.246410    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0122 12:32:08.246510    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:32:10.367238    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:32:10.367238    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:32:10.368109    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:32:12.914682    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:32:12.914778    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:32:12.920203    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:32:12.921107    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.191.211 22 <nil> <nil>}
	I0122 12:32:12.921107    9928 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0122 12:32:14.205600    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0122 12:32:14.206171    9928 machine.go:91] provisioned docker machine in 39.8976494s
	I0122 12:32:14.206223    9928 start.go:300] post-start starting for "multinode-484200" (driver="hyperv")
	I0122 12:32:14.206264    9928 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0122 12:32:14.219611    9928 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0122 12:32:14.219611    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:32:16.348251    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:32:16.348335    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:32:16.348335    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:32:18.828959    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:32:18.828959    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:32:18.829487    9928 sshutil.go:53] new ssh client: &{IP:172.23.191.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200\id_rsa Username:docker}
	I0122 12:32:18.940429    9928 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7207028s)
	I0122 12:32:18.953716    9928 ssh_runner.go:195] Run: cat /etc/os-release
	I0122 12:32:18.961304    9928 command_runner.go:130] > NAME=Buildroot
	I0122 12:32:18.961682    9928 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0122 12:32:18.961682    9928 command_runner.go:130] > ID=buildroot
	I0122 12:32:18.961682    9928 command_runner.go:130] > VERSION_ID=2021.02.12
	I0122 12:32:18.961682    9928 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0122 12:32:18.961682    9928 info.go:137] Remote host: Buildroot 2021.02.12
	I0122 12:32:18.961682    9928 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0122 12:32:18.961682    9928 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0122 12:32:18.963618    9928 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> 63962.pem in /etc/ssl/certs
	I0122 12:32:18.963704    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> /etc/ssl/certs/63962.pem
	I0122 12:32:18.976102    9928 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0122 12:32:18.991306    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem --> /etc/ssl/certs/63962.pem (1708 bytes)
	I0122 12:32:19.034395    9928 start.go:303] post-start completed in 4.8281502s
	I0122 12:32:19.034480    9928 fix.go:56] fixHost completed within 1m23.1301272s
	I0122 12:32:19.034546    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:32:21.130828    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:32:21.130828    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:32:21.130828    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:32:23.724428    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:32:23.724741    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:32:23.729801    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:32:23.730488    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.191.211 22 <nil> <nil>}
	I0122 12:32:23.730488    9928 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0122 12:32:23.873737    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705926743.873947323
	
	I0122 12:32:23.873816    9928 fix.go:206] guest clock: 1705926743.873947323
	I0122 12:32:23.873816    9928 fix.go:219] Guest: 2024-01-22 12:32:23.873947323 +0000 UTC Remote: 2024-01-22 12:32:19.0344806 +0000 UTC m=+88.847123001 (delta=4.839466723s)
	I0122 12:32:23.873816    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:32:26.033062    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:32:26.033062    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:32:26.033062    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:32:28.607995    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:32:28.608099    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:32:28.612977    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:32:28.613559    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.191.211 22 <nil> <nil>}
	I0122 12:32:28.613559    9928 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1705926743
	I0122 12:32:28.761089    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan 22 12:32:23 UTC 2024
	
	I0122 12:32:28.761089    9928 fix.go:226] clock set: Mon Jan 22 12:32:23 UTC 2024
	 (err=<nil>)
	I0122 12:32:28.761089    9928 start.go:83] releasing machines lock for "multinode-484200", held for 1m32.8576944s
	I0122 12:32:28.761723    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:32:30.891913    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:32:30.891913    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:32:30.892147    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:32:33.455695    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:32:33.455751    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:32:33.459949    9928 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0122 12:32:33.460072    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:32:33.471279    9928 ssh_runner.go:195] Run: cat /version.json
	I0122 12:32:33.472283    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:32:35.670517    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:32:35.670517    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:32:35.670517    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:32:35.718451    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:32:35.718451    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:32:35.718589    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:32:38.274392    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:32:38.274918    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:32:38.275596    9928 sshutil.go:53] new ssh client: &{IP:172.23.191.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200\id_rsa Username:docker}
	I0122 12:32:38.337377    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:32:38.337377    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:32:38.337595    9928 sshutil.go:53] new ssh client: &{IP:172.23.191.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200\id_rsa Username:docker}
	I0122 12:32:38.373718    9928 command_runner.go:130] > {"iso_version": "v1.32.1-1703784139-17866", "kicbase_version": "v0.0.42-1703723663-17866", "minikube_version": "v1.32.0", "commit": "eb69424d8f623d7cabea57d4395ce87adf1b5fc3"}
	I0122 12:32:38.373797    9928 ssh_runner.go:235] Completed: cat /version.json: (4.9024179s)
	I0122 12:32:38.386644    9928 ssh_runner.go:195] Run: systemctl --version
	I0122 12:32:38.496750    9928 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0122 12:32:38.496850    9928 command_runner.go:130] > systemd 247 (247)
	I0122 12:32:38.496850    9928 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0367792s)
	I0122 12:32:38.496850    9928 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0122 12:32:38.508640    9928 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0122 12:32:38.516348    9928 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0122 12:32:38.516708    9928 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0122 12:32:38.529327    9928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0122 12:32:38.551405    9928 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0122 12:32:38.551405    9928 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0122 12:32:38.551405    9928 start.go:475] detecting cgroup driver to use...
	I0122 12:32:38.551405    9928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 12:32:38.579202    9928 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0122 12:32:38.596414    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0122 12:32:38.624477    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0122 12:32:38.638904    9928 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0122 12:32:38.652404    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0122 12:32:38.682796    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 12:32:38.711915    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0122 12:32:38.740222    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 12:32:38.769950    9928 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0122 12:32:38.800527    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
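The run of `sed -i -r` edits above rewrites /etc/containerd/config.toml in place (pause image, cgroup driver, runc runtime class, CNI conf dir). A minimal sketch of just the cgroup-driver edit, applied to a scratch copy so no root is needed (the file contents here are illustrative, not the real containerd config):

```shell
# Scratch stand-in for /etc/containerd/config.toml
cfg=$(mktemp)
printf '  SystemdCgroup = true\n' > "$cfg"
# Same substitution the log runs: force the cgroupfs driver
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'SystemdCgroup' "$cfg"
rm -f "$cfg"
```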
	I0122 12:32:38.831862    9928 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0122 12:32:38.845724    9928 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0122 12:32:38.861368    9928 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0122 12:32:38.889186    9928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 12:32:39.060498    9928 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0122 12:32:39.085356    9928 start.go:475] detecting cgroup driver to use...
	I0122 12:32:39.099850    9928 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0122 12:32:39.118111    9928 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0122 12:32:39.118111    9928 command_runner.go:130] > [Unit]
	I0122 12:32:39.118111    9928 command_runner.go:130] > Description=Docker Application Container Engine
	I0122 12:32:39.118111    9928 command_runner.go:130] > Documentation=https://docs.docker.com
	I0122 12:32:39.118111    9928 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0122 12:32:39.118111    9928 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0122 12:32:39.118111    9928 command_runner.go:130] > StartLimitBurst=3
	I0122 12:32:39.118111    9928 command_runner.go:130] > StartLimitIntervalSec=60
	I0122 12:32:39.118111    9928 command_runner.go:130] > [Service]
	I0122 12:32:39.118111    9928 command_runner.go:130] > Type=notify
	I0122 12:32:39.118111    9928 command_runner.go:130] > Restart=on-failure
	I0122 12:32:39.118111    9928 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0122 12:32:39.118111    9928 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0122 12:32:39.118111    9928 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0122 12:32:39.118111    9928 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0122 12:32:39.118111    9928 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0122 12:32:39.118111    9928 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0122 12:32:39.118111    9928 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0122 12:32:39.118111    9928 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0122 12:32:39.118111    9928 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0122 12:32:39.118111    9928 command_runner.go:130] > ExecStart=
	I0122 12:32:39.118111    9928 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0122 12:32:39.118111    9928 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0122 12:32:39.118111    9928 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0122 12:32:39.118111    9928 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0122 12:32:39.118111    9928 command_runner.go:130] > LimitNOFILE=infinity
	I0122 12:32:39.118111    9928 command_runner.go:130] > LimitNPROC=infinity
	I0122 12:32:39.118111    9928 command_runner.go:130] > LimitCORE=infinity
	I0122 12:32:39.118111    9928 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0122 12:32:39.118111    9928 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0122 12:32:39.118703    9928 command_runner.go:130] > TasksMax=infinity
	I0122 12:32:39.118703    9928 command_runner.go:130] > TimeoutStartSec=0
	I0122 12:32:39.118703    9928 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0122 12:32:39.118703    9928 command_runner.go:130] > Delegate=yes
	I0122 12:32:39.118703    9928 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0122 12:32:39.118703    9928 command_runner.go:130] > KillMode=process
	I0122 12:32:39.118703    9928 command_runner.go:130] > [Install]
	I0122 12:32:39.118795    9928 command_runner.go:130] > WantedBy=multi-user.target
	I0122 12:32:39.131859    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 12:32:39.166801    9928 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0122 12:32:39.203144    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 12:32:39.241528    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0122 12:32:39.273996    9928 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0122 12:32:39.331664    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0122 12:32:39.352019    9928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 12:32:39.380712    9928 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
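Both crictl.yaml writes above follow the same pattern: `printf` the runtime endpoint and `tee` it into /etc/crictl.yaml, first for containerd and then for cri-dockerd once Docker is selected. The same write, sketched against a scratch path so no sudo is required:

```shell
# Scratch stand-in for /etc/crictl.yaml
tmp=$(mktemp)
printf '%s\n' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock' > "$tmp"
cat "$tmp"   # runtime-endpoint: unix:///var/run/cri-dockerd.sock
rm -f "$tmp"
```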
	I0122 12:32:39.393057    9928 ssh_runner.go:195] Run: which cri-dockerd
	I0122 12:32:39.397991    9928 command_runner.go:130] > /usr/bin/cri-dockerd
	I0122 12:32:39.409994    9928 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0122 12:32:39.426535    9928 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0122 12:32:39.467361    9928 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0122 12:32:39.646736    9928 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0122 12:32:39.810936    9928 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0122 12:32:39.810936    9928 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0122 12:32:39.854316    9928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 12:32:40.032260    9928 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0122 12:32:41.607672    9928 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.575406s)
	I0122 12:32:41.622485    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0122 12:32:41.656582    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0122 12:32:41.689867    9928 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0122 12:32:41.876944    9928 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0122 12:32:42.041710    9928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 12:32:42.245311    9928 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0122 12:32:42.286146    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0122 12:32:42.319102    9928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 12:32:42.493569    9928 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0122 12:32:42.596438    9928 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0122 12:32:42.610266    9928 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0122 12:32:42.617888    9928 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0122 12:32:42.617888    9928 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0122 12:32:42.617888    9928 command_runner.go:130] > Device: 16h/22d	Inode: 900         Links: 1
	I0122 12:32:42.617888    9928 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0122 12:32:42.617888    9928 command_runner.go:130] > Access: 2024-01-22 12:32:42.513179008 +0000
	I0122 12:32:42.617888    9928 command_runner.go:130] > Modify: 2024-01-22 12:32:42.513179008 +0000
	I0122 12:32:42.617888    9928 command_runner.go:130] > Change: 2024-01-22 12:32:42.517179008 +0000
	I0122 12:32:42.617888    9928 command_runner.go:130] >  Birth: -
	I0122 12:32:42.617888    9928 start.go:543] Will wait 60s for crictl version
	I0122 12:32:42.631135    9928 ssh_runner.go:195] Run: which crictl
	I0122 12:32:42.636541    9928 command_runner.go:130] > /usr/bin/crictl
	I0122 12:32:42.649195    9928 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0122 12:32:42.720075    9928 command_runner.go:130] > Version:  0.1.0
	I0122 12:32:42.720210    9928 command_runner.go:130] > RuntimeName:  docker
	I0122 12:32:42.720210    9928 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0122 12:32:42.720210    9928 command_runner.go:130] > RuntimeApiVersion:  v1
	I0122 12:32:42.720210    9928 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0122 12:32:42.730390    9928 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0122 12:32:42.761283    9928 command_runner.go:130] > 24.0.7
	I0122 12:32:42.771189    9928 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0122 12:32:42.801106    9928 command_runner.go:130] > 24.0.7
	I0122 12:32:42.804650    9928 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0122 12:32:42.804861    9928 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0122 12:32:42.810312    9928 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0122 12:32:42.810312    9928 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0122 12:32:42.810312    9928 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0122 12:32:42.810312    9928 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:49:f0:b7 Flags:up|broadcast|multicast|running}
	I0122 12:32:42.812891    9928 ip.go:210] interface addr: fe80::2ecf:c548:6b1f:89a8/64
	I0122 12:32:42.812891    9928 ip.go:210] interface addr: 172.23.176.1/20
	I0122 12:32:42.824480    9928 ssh_runner.go:195] Run: grep 172.23.176.1	host.minikube.internal$ /etc/hosts
	I0122 12:32:42.829528    9928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.176.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 12:32:42.848040    9928 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0122 12:32:42.858397    9928 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0122 12:32:42.884305    9928 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0122 12:32:42.884305    9928 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0122 12:32:42.884305    9928 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0122 12:32:42.884305    9928 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0122 12:32:42.884305    9928 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I0122 12:32:42.884305    9928 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0122 12:32:42.884305    9928 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0122 12:32:42.884305    9928 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0122 12:32:42.884305    9928 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 12:32:42.884305    9928 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0122 12:32:42.884305    9928 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0122 12:32:42.884305    9928 docker.go:615] Images already preloaded, skipping extraction
	I0122 12:32:42.894358    9928 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0122 12:32:42.919184    9928 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0122 12:32:42.919184    9928 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0122 12:32:42.919184    9928 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0122 12:32:42.920076    9928 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0122 12:32:42.920076    9928 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I0122 12:32:42.920076    9928 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0122 12:32:42.920076    9928 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0122 12:32:42.920076    9928 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0122 12:32:42.920123    9928 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 12:32:42.920123    9928 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0122 12:32:42.920153    9928 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0122 12:32:42.920153    9928 cache_images.go:84] Images are preloaded, skipping loading
	I0122 12:32:42.930240    9928 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0122 12:32:42.966208    9928 command_runner.go:130] > cgroupfs
	I0122 12:32:42.966208    9928 cni.go:84] Creating CNI manager for ""
	I0122 12:32:42.966857    9928 cni.go:136] 3 nodes found, recommending kindnet
	I0122 12:32:42.966857    9928 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0122 12:32:42.966962    9928 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.23.191.211 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-484200 NodeName:multinode-484200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.23.191.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.23.191.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0122 12:32:42.967273    9928 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.23.191.211
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-484200"
	  kubeletExtraArgs:
	    node-ip: 172.23.191.211
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.23.191.211"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
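The generated KubeletConfiguration above pins `cgroupDriver: cgroupfs`, matching what the earlier `docker info --format {{.CgroupDriver}}` probe returned; a mismatch between the two is a classic cause of kubelet start failures. A sketch of that cross-check against a scratch copy (file contents illustrative, not the real generated config):

```shell
# Scratch stand-in for the generated kubelet config fragment
cfg=$(mktemp)
printf 'kind: KubeletConfiguration\ncgroupDriver: cgroupfs\n' > "$cfg"
runtime_driver=cgroupfs   # in a real run: docker info --format '{{.CgroupDriver}}'
kubelet_driver=$(sed -n 's/^cgroupDriver: //p' "$cfg")
[ "$kubelet_driver" = "$runtime_driver" ] && echo "cgroup drivers match"
rm -f "$cfg"
```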
	I0122 12:32:42.967484    9928 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-484200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.191.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-484200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0122 12:32:42.979510    9928 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0122 12:32:42.994300    9928 command_runner.go:130] > kubeadm
	I0122 12:32:42.995139    9928 command_runner.go:130] > kubectl
	I0122 12:32:42.995139    9928 command_runner.go:130] > kubelet
	I0122 12:32:42.995139    9928 binaries.go:44] Found k8s binaries, skipping transfer
	I0122 12:32:43.008601    9928 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0122 12:32:43.022600    9928 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0122 12:32:43.050493    9928 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0122 12:32:43.081359    9928 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0122 12:32:43.126366    9928 ssh_runner.go:195] Run: grep 172.23.191.211	control-plane.minikube.internal$ /etc/hosts
	I0122 12:32:43.131127    9928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.191.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 12:32:43.147630    9928 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200 for IP: 172.23.191.211
	I0122 12:32:43.147711    9928 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 12:32:43.147878    9928 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0122 12:32:43.148843    9928 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0122 12:32:43.149575    9928 certs.go:315] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\client.key
	I0122 12:32:43.149575    9928 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.key.e0ad0578
	I0122 12:32:43.149575    9928 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.crt.e0ad0578 with IP's: [172.23.191.211 10.96.0.1 127.0.0.1 10.0.0.1]
	I0122 12:32:43.287292    9928 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.crt.e0ad0578 ...
	I0122 12:32:43.287292    9928 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.crt.e0ad0578: {Name:mk2e4e9cce5f09968f0188a3b017f4358d8ae6b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 12:32:43.288544    9928 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.key.e0ad0578 ...
	I0122 12:32:43.288544    9928 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.key.e0ad0578: {Name:mk5521ea78846a551397996cd3f44504deac2f7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 12:32:43.289618    9928 certs.go:337] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.crt.e0ad0578 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.crt
	I0122 12:32:43.297690    9928 certs.go:341] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.key.e0ad0578 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.key
	I0122 12:32:43.300399    9928 certs.go:315] skipping aggregator signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\proxy-client.key
	I0122 12:32:43.300399    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0122 12:32:43.300399    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0122 12:32:43.300399    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0122 12:32:43.301382    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0122 12:32:43.301382    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0122 12:32:43.301382    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0122 12:32:43.301382    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0122 12:32:43.302260    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0122 12:32:43.302363    9928 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\6396.pem (1338 bytes)
	W0122 12:32:43.302972    9928 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\6396_empty.pem, impossibly tiny 0 bytes
	I0122 12:32:43.303108    9928 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0122 12:32:43.303238    9928 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0122 12:32:43.303238    9928 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0122 12:32:43.303238    9928 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0122 12:32:43.303980    9928 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem (1708 bytes)
	I0122 12:32:43.304439    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\6396.pem -> /usr/share/ca-certificates/6396.pem
	I0122 12:32:43.304439    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> /usr/share/ca-certificates/63962.pem
	I0122 12:32:43.304439    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:32:43.305234    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0122 12:32:43.344883    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0122 12:32:43.382052    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0122 12:32:43.418619    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0122 12:32:43.463223    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0122 12:32:43.504577    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0122 12:32:43.547257    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0122 12:32:43.588533    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0122 12:32:43.629240    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\6396.pem --> /usr/share/ca-certificates/6396.pem (1338 bytes)
	I0122 12:32:43.669867    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem --> /usr/share/ca-certificates/63962.pem (1708 bytes)
	I0122 12:32:43.710172    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0122 12:32:43.749572    9928 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0122 12:32:43.793197    9928 ssh_runner.go:195] Run: openssl version
	I0122 12:32:43.800870    9928 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0122 12:32:43.813518    9928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0122 12:32:43.841865    9928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:32:43.848221    9928 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 22 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:32:43.848553    9928 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 22 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:32:43.861268    9928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:32:43.869982    9928 command_runner.go:130] > b5213941
	I0122 12:32:43.884405    9928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0122 12:32:43.910888    9928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6396.pem && ln -fs /usr/share/ca-certificates/6396.pem /etc/ssl/certs/6396.pem"
	I0122 12:32:43.937815    9928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6396.pem
	I0122 12:32:43.943545    9928 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 22 10:53 /usr/share/ca-certificates/6396.pem
	I0122 12:32:43.943610    9928 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 22 10:53 /usr/share/ca-certificates/6396.pem
	I0122 12:32:43.956220    9928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6396.pem
	I0122 12:32:43.963283    9928 command_runner.go:130] > 51391683
	I0122 12:32:43.976321    9928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6396.pem /etc/ssl/certs/51391683.0"
	I0122 12:32:44.007160    9928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/63962.pem && ln -fs /usr/share/ca-certificates/63962.pem /etc/ssl/certs/63962.pem"
	I0122 12:32:44.036400    9928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/63962.pem
	I0122 12:32:44.042273    9928 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 22 10:53 /usr/share/ca-certificates/63962.pem
	I0122 12:32:44.042369    9928 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 22 10:53 /usr/share/ca-certificates/63962.pem
	I0122 12:32:44.053020    9928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/63962.pem
	I0122 12:32:44.062451    9928 command_runner.go:130] > 3ec20f2e
	I0122 12:32:44.074727    9928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/63962.pem /etc/ssl/certs/3ec20f2e.0"
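The hash-then-symlink sequence above is the standard OpenSSL subject-hash layout: `openssl x509 -hash` computes an 8-hex-digit subject hash, and the cert is linked under `<hash>.0` so CApath-style lookup can find it. A minimal sketch of the same pattern, using a throwaway self-signed cert (`demo.pem` and the `certs/` directory are illustrative names, not the paths minikube uses):

```shell
# Generate a throwaway self-signed cert to stand in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout demo.key -out demo.pem -days 1 2>/dev/null

# Compute the subject hash OpenSSL's certificate lookup uses
# (same command the log runs against each .pem).
hash=$(openssl x509 -hash -noout -in demo.pem)

# Link the cert under <hash>.0, mirroring the log's
# "test -L ... || ln -fs ..." step.
mkdir -p certs
ln -fs "$PWD/demo.pem" "certs/${hash}.0"
echo "$hash"
```

The `.0` suffix is a collision counter; a second certificate with the same subject hash would be linked as `<hash>.1`.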
	I0122 12:32:44.102721    9928 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0122 12:32:44.109755    9928 command_runner.go:130] > ca.crt
	I0122 12:32:44.109755    9928 command_runner.go:130] > ca.key
	I0122 12:32:44.109755    9928 command_runner.go:130] > healthcheck-client.crt
	I0122 12:32:44.109755    9928 command_runner.go:130] > healthcheck-client.key
	I0122 12:32:44.109755    9928 command_runner.go:130] > peer.crt
	I0122 12:32:44.109755    9928 command_runner.go:130] > peer.key
	I0122 12:32:44.109755    9928 command_runner.go:130] > server.crt
	I0122 12:32:44.109755    9928 command_runner.go:130] > server.key
	I0122 12:32:44.122622    9928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0122 12:32:44.130188    9928 command_runner.go:130] > Certificate will not expire
	I0122 12:32:44.142862    9928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0122 12:32:44.152249    9928 command_runner.go:130] > Certificate will not expire
	I0122 12:32:44.166098    9928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0122 12:32:44.174209    9928 command_runner.go:130] > Certificate will not expire
	I0122 12:32:44.187619    9928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0122 12:32:44.196728    9928 command_runner.go:130] > Certificate will not expire
	I0122 12:32:44.212163    9928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0122 12:32:44.221229    9928 command_runner.go:130] > Certificate will not expire
	I0122 12:32:44.234784    9928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0122 12:32:44.243601    9928 command_runner.go:130] > Certificate will not expire
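Each "Certificate will not expire" line above comes from `openssl x509 -checkend 86400`, which exits 0 when the certificate is still valid 86400 seconds (24 hours) from now. A self-contained sketch of that check, against a hypothetical freshly generated cert rather than the cluster's real ones:

```shell
# Create a cert valid for 30 days (stand-in for e.g. etcd/server.crt).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout k.pem -out c.pem -days 30 2>/dev/null

# Exit status 0 means the cert will NOT expire within the next 24h,
# which is exactly the condition the log verifies per certificate.
if openssl x509 -noout -in c.pem -checkend 86400; then
  status="valid"
else
  status="expiring"
fi
echo "$status"
```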
	I0122 12:32:44.243879    9928 kubeadm.go:404] StartCluster: {Name:multinode-484200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.28.4 ClusterName:multinode-484200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.23.191.211 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.185.74 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.23.187.157 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false in
gress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0122 12:32:44.254088    9928 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0122 12:32:44.290868    9928 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0122 12:32:44.309792    9928 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0122 12:32:44.309792    9928 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0122 12:32:44.309792    9928 command_runner.go:130] > /var/lib/minikube/etcd:
	I0122 12:32:44.309792    9928 command_runner.go:130] > member
	I0122 12:32:44.310055    9928 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0122 12:32:44.310174    9928 kubeadm.go:636] restartCluster start
	I0122 12:32:44.328830    9928 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0122 12:32:44.344376    9928 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0122 12:32:44.345388    9928 kubeconfig.go:135] verify returned: extract IP: "multinode-484200" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 12:32:44.345769    9928 kubeconfig.go:146] "multinode-484200" context is missing from C:\Users\jenkins.minikube7\minikube-integration\kubeconfig - will repair!
	I0122 12:32:44.346367    9928 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 12:32:44.358248    9928 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 12:32:44.359298    9928 kapi.go:59] client config for multinode-484200: &rest.Config{Host:"https://172.23.191.211:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200/client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200/client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADat
a:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x184a600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0122 12:32:44.361048    9928 cert_rotation.go:137] Starting client certificate rotation controller
	I0122 12:32:44.375418    9928 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0122 12:32:44.393044    9928 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0122 12:32:44.393123    9928 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0122 12:32:44.393123    9928 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0122 12:32:44.393172    9928 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0122 12:32:44.393172    9928 command_runner.go:130] >  kind: InitConfiguration
	I0122 12:32:44.393172    9928 command_runner.go:130] >  localAPIEndpoint:
	I0122 12:32:44.393172    9928 command_runner.go:130] > -  advertiseAddress: 172.23.177.163
	I0122 12:32:44.393172    9928 command_runner.go:130] > +  advertiseAddress: 172.23.191.211
	I0122 12:32:44.393172    9928 command_runner.go:130] >    bindPort: 8443
	I0122 12:32:44.393172    9928 command_runner.go:130] >  bootstrapTokens:
	I0122 12:32:44.393329    9928 command_runner.go:130] >    - groups:
	I0122 12:32:44.393329    9928 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0122 12:32:44.393329    9928 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0122 12:32:44.393329    9928 command_runner.go:130] >    name: "multinode-484200"
	I0122 12:32:44.393329    9928 command_runner.go:130] >    kubeletExtraArgs:
	I0122 12:32:44.393410    9928 command_runner.go:130] > -    node-ip: 172.23.177.163
	I0122 12:32:44.393410    9928 command_runner.go:130] > +    node-ip: 172.23.191.211
	I0122 12:32:44.393410    9928 command_runner.go:130] >    taints: []
	I0122 12:32:44.393410    9928 command_runner.go:130] >  ---
	I0122 12:32:44.393410    9928 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0122 12:32:44.393471    9928 command_runner.go:130] >  kind: ClusterConfiguration
	I0122 12:32:44.393496    9928 command_runner.go:130] >  apiServer:
	I0122 12:32:44.393496    9928 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.23.177.163"]
	I0122 12:32:44.393525    9928 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.23.191.211"]
	I0122 12:32:44.393525    9928 command_runner.go:130] >    extraArgs:
	I0122 12:32:44.393525    9928 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0122 12:32:44.393525    9928 command_runner.go:130] >  controllerManager:
	I0122 12:32:44.393525    9928 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.23.177.163
	+  advertiseAddress: 172.23.191.211
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-484200"
	   kubeletExtraArgs:
	-    node-ip: 172.23.177.163
	+    node-ip: 172.23.191.211
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.23.177.163"]
	+  certSANs: ["127.0.0.1", "localhost", "172.23.191.211"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
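The "needs reconfigure: configs differ" decision above hinges on nothing more than the exit status of `diff -u` between the current kubeadm config and the freshly rendered one. A reduced sketch of that check, with one-line stand-in files instead of the real kubeadm.yaml:

```shell
# Stand-in configs differing only in the advertise address,
# like the old (172.23.177.163) vs. new (172.23.191.211) IPs in the log.
printf 'advertiseAddress: 172.23.177.163\n' > kubeadm.yaml
printf 'advertiseAddress: 172.23.191.211\n' > kubeadm.yaml.new

# diff exits 0 when files match, non-zero when they differ;
# a non-zero status is what triggers the reconfigure path.
if diff -u kubeadm.yaml kubeadm.yaml.new > config.diff; then
  decision="in sync"
else
  decision="needs reconfigure"
fi
echo "$decision"
```

On a difference, the subsequent step in the log simply copies `kubeadm.yaml.new` over `kubeadm.yaml` and re-runs the relevant `kubeadm init` phases.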
	I0122 12:32:44.393525    9928 kubeadm.go:1135] stopping kube-system containers ...
	I0122 12:32:44.403231    9928 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0122 12:32:44.437239    9928 command_runner.go:130] > f56e8f9de45e
	I0122 12:32:44.437568    9928 command_runner.go:130] > f4ffc78cf0f3
	I0122 12:32:44.437568    9928 command_runner.go:130] > 19c7fcee88a7
	I0122 12:32:44.437568    9928 command_runner.go:130] > e46fc8eaac50
	I0122 12:32:44.437568    9928 command_runner.go:130] > e55da8df34e5
	I0122 12:32:44.437617    9928 command_runner.go:130] > 3f4ebe9f43ea
	I0122 12:32:44.437617    9928 command_runner.go:130] > c1a37f7be8af
	I0122 12:32:44.437617    9928 command_runner.go:130] > e27d0a036e72
	I0122 12:32:44.437617    9928 command_runner.go:130] > eae98e092eb2
	I0122 12:32:44.437649    9928 command_runner.go:130] > 487e8e235693
	I0122 12:32:44.437649    9928 command_runner.go:130] > dca0a89d970d
	I0122 12:32:44.437649    9928 command_runner.go:130] > c38b8024525e
	I0122 12:32:44.437699    9928 command_runner.go:130] > 861530d246ac
	I0122 12:32:44.437699    9928 command_runner.go:130] > 24388c3440dd
	I0122 12:32:44.437699    9928 command_runner.go:130] > 37e2e82b5801
	I0122 12:32:44.437699    9928 command_runner.go:130] > a8d60a34b9ea
	I0122 12:32:44.438170    9928 docker.go:483] Stopping containers: [f56e8f9de45e f4ffc78cf0f3 19c7fcee88a7 e46fc8eaac50 e55da8df34e5 3f4ebe9f43ea c1a37f7be8af e27d0a036e72 eae98e092eb2 487e8e235693 dca0a89d970d c38b8024525e 861530d246ac 24388c3440dd 37e2e82b5801 a8d60a34b9ea]
	I0122 12:32:44.448738    9928 ssh_runner.go:195] Run: docker stop f56e8f9de45e f4ffc78cf0f3 19c7fcee88a7 e46fc8eaac50 e55da8df34e5 3f4ebe9f43ea c1a37f7be8af e27d0a036e72 eae98e092eb2 487e8e235693 dca0a89d970d c38b8024525e 861530d246ac 24388c3440dd 37e2e82b5801 a8d60a34b9ea
	I0122 12:32:44.478400    9928 command_runner.go:130] > f56e8f9de45e
	I0122 12:32:44.478470    9928 command_runner.go:130] > f4ffc78cf0f3
	I0122 12:32:44.478470    9928 command_runner.go:130] > 19c7fcee88a7
	I0122 12:32:44.478470    9928 command_runner.go:130] > e46fc8eaac50
	I0122 12:32:44.478470    9928 command_runner.go:130] > e55da8df34e5
	I0122 12:32:44.478470    9928 command_runner.go:130] > 3f4ebe9f43ea
	I0122 12:32:44.478470    9928 command_runner.go:130] > c1a37f7be8af
	I0122 12:32:44.478470    9928 command_runner.go:130] > e27d0a036e72
	I0122 12:32:44.478470    9928 command_runner.go:130] > eae98e092eb2
	I0122 12:32:44.478470    9928 command_runner.go:130] > 487e8e235693
	I0122 12:32:44.478470    9928 command_runner.go:130] > dca0a89d970d
	I0122 12:32:44.478470    9928 command_runner.go:130] > c38b8024525e
	I0122 12:32:44.478470    9928 command_runner.go:130] > 861530d246ac
	I0122 12:32:44.478470    9928 command_runner.go:130] > 24388c3440dd
	I0122 12:32:44.478470    9928 command_runner.go:130] > 37e2e82b5801
	I0122 12:32:44.478470    9928 command_runner.go:130] > a8d60a34b9ea
	I0122 12:32:44.491224    9928 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0122 12:32:44.527819    9928 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0122 12:32:44.540318    9928 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0122 12:32:44.540318    9928 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0122 12:32:44.541200    9928 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0122 12:32:44.541354    9928 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 12:32:44.541445    9928 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 12:32:44.553447    9928 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0122 12:32:44.567507    9928 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0122 12:32:44.567507    9928 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 12:32:44.929452    9928 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0122 12:32:44.929958    9928 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0122 12:32:44.929958    9928 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0122 12:32:44.929958    9928 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0122 12:32:44.930095    9928 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0122 12:32:44.930095    9928 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0122 12:32:44.930095    9928 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0122 12:32:44.930095    9928 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0122 12:32:44.930095    9928 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0122 12:32:44.930095    9928 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0122 12:32:44.930095    9928 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0122 12:32:44.930173    9928 command_runner.go:130] > [certs] Using the existing "sa" key
	I0122 12:32:44.930250    9928 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 12:32:46.036426    9928 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0122 12:32:46.036426    9928 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0122 12:32:46.037061    9928 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0122 12:32:46.037061    9928 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0122 12:32:46.037121    9928 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0122 12:32:46.037121    9928 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.1068653s)
	I0122 12:32:46.037178    9928 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0122 12:32:46.299211    9928 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0122 12:32:46.299260    9928 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0122 12:32:46.299260    9928 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0122 12:32:46.299260    9928 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 12:32:46.391717    9928 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0122 12:32:46.391773    9928 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0122 12:32:46.391773    9928 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0122 12:32:46.391773    9928 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0122 12:32:46.391891    9928 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0122 12:32:46.475624    9928 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0122 12:32:46.475753    9928 api_server.go:52] waiting for apiserver process to appear ...
	I0122 12:32:46.494768    9928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 12:32:46.994064    9928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 12:32:47.504125    9928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 12:32:47.992281    9928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 12:32:48.501725    9928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 12:32:48.998788    9928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 12:32:49.061531    9928 command_runner.go:130] > 1830
	I0122 12:32:49.061531    9928 api_server.go:72] duration metric: took 2.585767s to wait for apiserver process to appear ...
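The repeated `sudo pgrep -xnf kube-apiserver.*` lines above are a poll-until-ready loop: retry a check on a fixed interval until it succeeds or a budget runs out. A generic sketch of that pattern (`wait_for` and the `check` stub are illustrative, not minikube code; the stub succeeds on its third call to mimic an apiserver process that takes a few polls to appear):

```shell
# Retry "$@" up to $1 times with a short delay; return 0 on first success.
wait_for() {
  attempts=$1; shift
  i=0
  while [ "$i" -lt "$attempts" ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep 0.1
  done
  return 1
}

# Stand-in check: records each call and succeeds from the third one on.
count_file=$(mktemp)
check() {
  echo x >> "$count_file"
  [ "$(grep -c x "$count_file")" -ge 3 ]
}

wait_for 10 check && result=ready || result=timeout
echo "$result"
rm -f "$count_file"
```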
	I0122 12:32:49.061531    9928 api_server.go:88] waiting for apiserver healthz status ...
	I0122 12:32:49.061531    9928 api_server.go:253] Checking apiserver healthz at https://172.23.191.211:8443/healthz ...
	I0122 12:32:53.390808    9928 api_server.go:279] https://172.23.191.211:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0122 12:32:53.390808    9928 api_server.go:103] status: https://172.23.191.211:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0122 12:32:53.391821    9928 api_server.go:253] Checking apiserver healthz at https://172.23.191.211:8443/healthz ...
	I0122 12:32:53.433345    9928 api_server.go:279] https://172.23.191.211:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0122 12:32:53.433486    9928 api_server.go:103] status: https://172.23.191.211:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0122 12:32:53.564896    9928 api_server.go:253] Checking apiserver healthz at https://172.23.191.211:8443/healthz ...
	I0122 12:32:53.578558    9928 api_server.go:279] https://172.23.191.211:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0122 12:32:53.578558    9928 api_server.go:103] status: https://172.23.191.211:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
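In the verbose healthz body above, every `[+]` line is a passing check and every `[-]` line a failing one; the endpoint keeps returning 500 until no `[-]` lines remain (here, the RBAC bootstrap-roles and priority-classes post-start hooks are still pending). A small sketch of counting the failing checks from such a body, using an abbreviated sample rather than live apiserver output:

```shell
# Abbreviated healthz verbose body, as returned while the
# apiserver is still bootstrapping.
body='[+]ping ok
[+]etcd ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
healthz check failed'

# Lines beginning with "[-]" are the checks still failing.
failing=$(printf '%s\n' "$body" | grep -c '^\[-\]')
echo "$failing"
```

Once that count reaches zero, `/healthz` returns 200 and the wait loop in the log moves on.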
	I0122 12:32:54.072034    9928 api_server.go:253] Checking apiserver healthz at https://172.23.191.211:8443/healthz ...
	I0122 12:32:54.090023    9928 api_server.go:279] https://172.23.191.211:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0122 12:32:54.090023    9928 api_server.go:103] status: https://172.23.191.211:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0122 12:32:54.566845    9928 api_server.go:253] Checking apiserver healthz at https://172.23.191.211:8443/healthz ...
	I0122 12:32:54.582616    9928 api_server.go:279] https://172.23.191.211:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0122 12:32:54.582616    9928 api_server.go:103] status: https://172.23.191.211:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0122 12:32:55.077016    9928 api_server.go:253] Checking apiserver healthz at https://172.23.191.211:8443/healthz ...
	I0122 12:32:55.088814    9928 api_server.go:279] https://172.23.191.211:8443/healthz returned 200:
	ok
	I0122 12:32:55.089597    9928 round_trippers.go:463] GET https://172.23.191.211:8443/version
	I0122 12:32:55.089597    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:55.089597    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:55.089597    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:55.102927    9928 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0122 12:32:55.102927    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:55.102927    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:55.102927    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:55.102927    9928 round_trippers.go:580]     Content-Length: 264
	I0122 12:32:55.102927    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:55 GMT
	I0122 12:32:55.102927    9928 round_trippers.go:580]     Audit-Id: e09fdf80-8995-4398-8c74-4646a50e7309
	I0122 12:32:55.102927    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:55.102927    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:55.102927    9928 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0122 12:32:55.102927    9928 api_server.go:141] control plane version: v1.28.4
	I0122 12:32:55.102927    9928 api_server.go:131] duration metric: took 6.0413692s to wait for apiserver health ...
	I0122 12:32:55.102927    9928 cni.go:84] Creating CNI manager for ""
	I0122 12:32:55.102927    9928 cni.go:136] 3 nodes found, recommending kindnet
	I0122 12:32:55.103932    9928 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0122 12:32:55.116926    9928 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0122 12:32:55.126574    9928 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0122 12:32:55.126657    9928 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0122 12:32:55.126657    9928 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0122 12:32:55.126657    9928 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0122 12:32:55.126657    9928 command_runner.go:130] > Access: 2024-01-22 12:31:26.825009300 +0000
	I0122 12:32:55.126657    9928 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0122 12:32:55.126657    9928 command_runner.go:130] > Change: 2024-01-22 12:31:17.097000000 +0000
	I0122 12:32:55.126775    9928 command_runner.go:130] >  Birth: -
	I0122 12:32:55.126840    9928 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0122 12:32:55.126887    9928 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0122 12:32:55.179088    9928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0122 12:32:57.126888    9928 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0122 12:32:57.126888    9928 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0122 12:32:57.126888    9928 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0122 12:32:57.126888    9928 command_runner.go:130] > daemonset.apps/kindnet configured
	I0122 12:32:57.128421    9928 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.9493247s)
	I0122 12:32:57.128570    9928 system_pods.go:43] waiting for kube-system pods to appear ...
	I0122 12:32:57.128623    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods
	I0122 12:32:57.128623    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:57.128623    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:57.128623    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:57.134192    9928 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:32:57.134192    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:57.134192    9928 round_trippers.go:580]     Audit-Id: 5c2ddcc9-e068-4f9d-92a2-1fc3c736beb1
	I0122 12:32:57.134192    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:57.134192    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:57.134192    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:57.134192    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:57.134893    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:57 GMT
	I0122 12:32:57.136389    9928 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1734"},"items":[{"metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84179 chars]
	I0122 12:32:57.142709    9928 system_pods.go:59] 12 kube-system pods found
	I0122 12:32:57.143235    9928 system_pods.go:61] "coredns-5dd5756b68-g9wk9" [d5b665f2-ead9-43b8-8bf6-73c2edf81f8f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0122 12:32:57.143287    9928 system_pods.go:61] "etcd-multinode-484200" [dc38500e-1aab-483b-bc82-a1071a7b7796] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0122 12:32:57.143287    9928 system_pods.go:61] "kindnet-f9bw4" [8ee26eb1-daae-41a6-8d1c-e7f3e821ebb5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0122 12:32:57.143287    9928 system_pods.go:61] "kindnet-xqpt7" [302b6561-a7e7-40f7-8b2d-9cbe86c27476] Running
	I0122 12:32:57.143287    9928 system_pods.go:61] "kindnet-zs4rk" [a84b5ca3-20c8-4246-911d-8ca0f0f5bf88] Running
	I0122 12:32:57.143287    9928 system_pods.go:61] "kube-apiserver-multinode-484200" [1a34c38d-813d-4a41-ab10-ee15c4a85906] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0122 12:32:57.143287    9928 system_pods.go:61] "kube-controller-manager-multinode-484200" [2f20b1d1-9ba0-417b-9060-a0e50a1522ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0122 12:32:57.143287    9928 system_pods.go:61] "kube-proxy-m94rg" [603ddbfd-0357-4e57-a38a-5054e02f81ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0122 12:32:57.143395    9928 system_pods.go:61] "kube-proxy-mxd9x" [f90d5b6e-4f0e-466f-8cc8-bc71ebdbad3c] Running
	I0122 12:32:57.143395    9928 system_pods.go:61] "kube-proxy-nsjc6" [6023ce45-661a-489d-aacf-21a85a2c32e4] Running
	I0122 12:32:57.143437    9928 system_pods.go:61] "kube-scheduler-multinode-484200" [d2f95119-a5d5-4331-880e-bf6add1d87b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0122 12:32:57.143464    9928 system_pods.go:61] "storage-provisioner" [30187bc8-2b16-4ca8-944b-c71bfb25e36b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0122 12:32:57.143464    9928 system_pods.go:74] duration metric: took 14.8403ms to wait for pod list to return data ...
	I0122 12:32:57.143464    9928 node_conditions.go:102] verifying NodePressure condition ...
	I0122 12:32:57.143529    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes
	I0122 12:32:57.143529    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:57.143529    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:57.143529    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:57.147141    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:32:57.147141    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:57.147141    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:57.147141    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:57.147141    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:57.148131    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:57.148131    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:57 GMT
	I0122 12:32:57.148131    9928 round_trippers.go:580]     Audit-Id: fffe29c1-7781-4c33-ae6a-25fc47eb98b2
	I0122 12:32:57.148131    9928 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1734"},"items":[{"metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1662","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 14859 chars]
	I0122 12:32:57.149189    9928 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0122 12:32:57.149189    9928 node_conditions.go:123] node cpu capacity is 2
	I0122 12:32:57.149189    9928 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0122 12:32:57.149189    9928 node_conditions.go:123] node cpu capacity is 2
	I0122 12:32:57.149189    9928 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0122 12:32:57.149189    9928 node_conditions.go:123] node cpu capacity is 2
	I0122 12:32:57.149189    9928 node_conditions.go:105] duration metric: took 5.7256ms to run NodePressure ...
	I0122 12:32:57.149189    9928 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 12:32:57.483493    9928 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0122 12:32:57.483563    9928 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0122 12:32:57.483719    9928 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0122 12:32:57.483800    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0122 12:32:57.483800    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:57.483800    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:57.484028    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:57.489230    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:32:57.489230    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:57.489230    9928 round_trippers.go:580]     Audit-Id: fc6d313b-8e86-403f-bc86-9c57430354a3
	I0122 12:32:57.489230    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:57.489230    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:57.489230    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:57.489230    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:57.489230    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:57 GMT
	I0122 12:32:57.489230    9928 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1736"},"items":[{"metadata":{"name":"etcd-multinode-484200","namespace":"kube-system","uid":"dc38500e-1aab-483b-bc82-a1071a7b7796","resourceVersion":"1710","creationTimestamp":"2024-01-22T12:32:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.191.211:2379","kubernetes.io/config.hash":"487d659dc59bb7ccbc4aa1b97ecfd4c1","kubernetes.io/config.mirror":"487d659dc59bb7ccbc4aa1b97ecfd4c1","kubernetes.io/config.seen":"2024-01-22T12:32:47.015233808Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:32:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 29377 chars]
	I0122 12:32:57.491141    9928 kubeadm.go:787] kubelet initialised
	I0122 12:32:57.491141    9928 kubeadm.go:788] duration metric: took 7.4224ms waiting for restarted kubelet to initialise ...
	I0122 12:32:57.491141    9928 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0122 12:32:57.491141    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods
	I0122 12:32:57.491141    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:57.491141    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:57.491141    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:57.499217    9928 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0122 12:32:57.499217    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:57.499217    9928 round_trippers.go:580]     Audit-Id: c28dc28a-f0c8-48a5-ba74-4b0d894ff390
	I0122 12:32:57.499217    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:57.499217    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:57.499217    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:57.499303    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:57.499303    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:57 GMT
	I0122 12:32:57.500878    9928 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1736"},"items":[{"metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84179 chars]
	I0122 12:32:57.504211    9928 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-g9wk9" in "kube-system" namespace to be "Ready" ...
	I0122 12:32:57.504799    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:32:57.504873    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:57.504873    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:57.504873    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:57.510364    9928 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:32:57.510364    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:57.510364    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:57.510984    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:57.510984    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:57.510984    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:57 GMT
	I0122 12:32:57.511078    9928 round_trippers.go:580]     Audit-Id: cf97ab7e-8a8f-4207-bb20-b81c433b289c
	I0122 12:32:57.511105    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:57.511105    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:32:57.511838    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:32:57.511882    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:57.511882    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:57.511882    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:57.514408    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:32:57.514408    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:57.515293    9928 round_trippers.go:580]     Audit-Id: 3ef2f7fd-07e0-4803-a40c-c5e4cd9ef057
	I0122 12:32:57.515316    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:57.515316    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:57.515316    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:57.515316    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:57.515316    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:57 GMT
	I0122 12:32:57.515673    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1662","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0122 12:32:57.518287    9928 pod_ready.go:97] node "multinode-484200" hosting pod "coredns-5dd5756b68-g9wk9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484200" has status "Ready":"False"
	I0122 12:32:57.518287    9928 pod_ready.go:81] duration metric: took 14.0758ms waiting for pod "coredns-5dd5756b68-g9wk9" in "kube-system" namespace to be "Ready" ...
	E0122 12:32:57.518287    9928 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-484200" hosting pod "coredns-5dd5756b68-g9wk9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484200" has status "Ready":"False"
	I0122 12:32:57.518287    9928 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:32:57.518287    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-484200
	I0122 12:32:57.518287    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:57.518287    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:57.518287    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:57.520286    9928 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0122 12:32:57.520286    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:57.520286    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:57.521136    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:57.521136    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:57.521136    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:57.521136    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:57 GMT
	I0122 12:32:57.521317    9928 round_trippers.go:580]     Audit-Id: 4b0e9c99-2fed-4725-b64f-99a0e1b74caf
	I0122 12:32:57.521493    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-484200","namespace":"kube-system","uid":"dc38500e-1aab-483b-bc82-a1071a7b7796","resourceVersion":"1710","creationTimestamp":"2024-01-22T12:32:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.191.211:2379","kubernetes.io/config.hash":"487d659dc59bb7ccbc4aa1b97ecfd4c1","kubernetes.io/config.mirror":"487d659dc59bb7ccbc4aa1b97ecfd4c1","kubernetes.io/config.seen":"2024-01-22T12:32:47.015233808Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:32:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6097 chars]
	I0122 12:32:57.521954    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:32:57.521954    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:57.521954    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:57.521954    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:57.524278    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:32:57.524278    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:57.524278    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:57.524278    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:57.524278    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:57.524278    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:57.524278    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:57 GMT
	I0122 12:32:57.525286    9928 round_trippers.go:580]     Audit-Id: fc6a76f0-4529-4301-99d7-a58f72c928ae
	I0122 12:32:57.525286    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1662","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0122 12:32:57.525286    9928 pod_ready.go:97] node "multinode-484200" hosting pod "etcd-multinode-484200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484200" has status "Ready":"False"
	I0122 12:32:57.525286    9928 pod_ready.go:81] duration metric: took 6.9997ms waiting for pod "etcd-multinode-484200" in "kube-system" namespace to be "Ready" ...
	E0122 12:32:57.525286    9928 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-484200" hosting pod "etcd-multinode-484200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484200" has status "Ready":"False"
	I0122 12:32:57.525286    9928 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:32:57.525286    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-484200
	I0122 12:32:57.525286    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:57.525286    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:57.525286    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:57.529278    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:32:57.530277    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:57.530322    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:57 GMT
	I0122 12:32:57.530322    9928 round_trippers.go:580]     Audit-Id: 610f64d3-0767-429a-8a16-b47ae46a97c1
	I0122 12:32:57.530322    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:57.530322    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:57.530322    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:57.530322    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:57.530322    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-484200","namespace":"kube-system","uid":"1a34c38d-813d-4a41-ab10-ee15c4a85906","resourceVersion":"1711","creationTimestamp":"2024-01-22T12:32:54Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.191.211:8443","kubernetes.io/config.hash":"58aa3387db541c8286ea4c05e28153aa","kubernetes.io/config.mirror":"58aa3387db541c8286ea4c05e28153aa","kubernetes.io/config.seen":"2024-01-22T12:32:47.015239308Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:32:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7653 chars]
	I0122 12:32:57.531274    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:32:57.531274    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:57.531341    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:57.531341    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:57.533916    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:32:57.533916    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:57.533916    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:57.533916    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:57.533916    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:57.533916    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:57 GMT
	I0122 12:32:57.533916    9928 round_trippers.go:580]     Audit-Id: ecdca817-00e0-44e6-93cd-d9d42d7397cc
	I0122 12:32:57.533916    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:57.534755    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1662","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0122 12:32:57.535268    9928 pod_ready.go:97] node "multinode-484200" hosting pod "kube-apiserver-multinode-484200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484200" has status "Ready":"False"
	I0122 12:32:57.535268    9928 pod_ready.go:81] duration metric: took 9.9812ms waiting for pod "kube-apiserver-multinode-484200" in "kube-system" namespace to be "Ready" ...
	E0122 12:32:57.535268    9928 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-484200" hosting pod "kube-apiserver-multinode-484200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484200" has status "Ready":"False"
	I0122 12:32:57.535268    9928 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:32:57.535268    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-484200
	I0122 12:32:57.535268    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:57.535268    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:57.535268    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:57.539663    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:32:57.539719    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:57.539719    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:57 GMT
	I0122 12:32:57.539719    9928 round_trippers.go:580]     Audit-Id: 6ca054d8-e630-479b-876d-f4174f6c7148
	I0122 12:32:57.539778    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:57.539791    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:57.539791    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:57.539791    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:57.540799    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-484200","namespace":"kube-system","uid":"2f20b1d1-9ba0-417b-9060-a0e50a1522ad","resourceVersion":"1712","creationTimestamp":"2024-01-22T12:10:47Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"09e3d582188e7c296d7c87aae161a2a5","kubernetes.io/config.mirror":"09e3d582188e7c296d7c87aae161a2a5","kubernetes.io/config.seen":"2024-01-22T12:10:46.856991974Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7441 chars]
	I0122 12:32:57.540908    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:32:57.540908    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:57.540908    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:57.540908    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:57.544123    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:32:57.544123    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:57.544123    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:57.544123    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:57 GMT
	I0122 12:32:57.544123    9928 round_trippers.go:580]     Audit-Id: 6c4c38eb-d93d-4025-a0e3-e091d8f4c98b
	I0122 12:32:57.544123    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:57.544123    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:57.544123    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:57.545110    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1662","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0122 12:32:57.545110    9928 pod_ready.go:97] node "multinode-484200" hosting pod "kube-controller-manager-multinode-484200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484200" has status "Ready":"False"
	I0122 12:32:57.545110    9928 pod_ready.go:81] duration metric: took 9.8423ms waiting for pod "kube-controller-manager-multinode-484200" in "kube-system" namespace to be "Ready" ...
	E0122 12:32:57.545110    9928 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-484200" hosting pod "kube-controller-manager-multinode-484200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484200" has status "Ready":"False"
	I0122 12:32:57.545110    9928 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-m94rg" in "kube-system" namespace to be "Ready" ...
	I0122 12:32:57.743428    9928 request.go:629] Waited for 198.0538ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m94rg
	I0122 12:32:57.743428    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m94rg
	I0122 12:32:57.743428    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:57.743428    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:57.743428    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:57.748015    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:32:57.748015    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:57.748015    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:57.748015    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:57.748477    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:57 GMT
	I0122 12:32:57.748477    9928 round_trippers.go:580]     Audit-Id: fd54bf24-3f89-4fe1-993b-153c00cf6dc6
	I0122 12:32:57.748519    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:57.748519    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:57.748725    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-m94rg","generateName":"kube-proxy-","namespace":"kube-system","uid":"603ddbfd-0357-4e57-a38a-5054e02f81ee","resourceVersion":"1714","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7762395d-4348-422a-b76e-6e2cdf460767","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7762395d-4348-422a-b76e-6e2cdf460767\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5933 chars]
	I0122 12:32:57.932511    9928 request.go:629] Waited for 183.0372ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:32:57.932783    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:32:57.932783    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:57.932783    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:57.932783    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:57.939759    9928 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0122 12:32:57.939759    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:57.939759    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:57.939759    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:57.939759    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:57 GMT
	I0122 12:32:57.939759    9928 round_trippers.go:580]     Audit-Id: 6fc61b84-923d-43be-b75c-40b80af3799d
	I0122 12:32:57.939759    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:57.939759    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:57.941239    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1662","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0122 12:32:57.941239    9928 pod_ready.go:97] node "multinode-484200" hosting pod "kube-proxy-m94rg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484200" has status "Ready":"False"
	I0122 12:32:57.941239    9928 pod_ready.go:81] duration metric: took 396.1268ms waiting for pod "kube-proxy-m94rg" in "kube-system" namespace to be "Ready" ...
	E0122 12:32:57.941771    9928 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-484200" hosting pod "kube-proxy-m94rg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484200" has status "Ready":"False"
	I0122 12:32:57.941771    9928 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mxd9x" in "kube-system" namespace to be "Ready" ...
	I0122 12:32:58.136514    9928 request.go:629] Waited for 194.4463ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxd9x
	I0122 12:32:58.136834    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxd9x
	I0122 12:32:58.136834    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:58.136834    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:58.136922    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:58.140572    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:32:58.140572    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:58.140572    9928 round_trippers.go:580]     Audit-Id: aa28fa26-7e02-4987-afe5-c415a1915a5f
	I0122 12:32:58.141317    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:58.141317    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:58.141317    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:58.141317    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:58.141317    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:58 GMT
	I0122 12:32:58.141956    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mxd9x","generateName":"kube-proxy-","namespace":"kube-system","uid":"f90d5b6e-4f0e-466f-8cc8-bc71ebdbad3c","resourceVersion":"1585","creationTimestamp":"2024-01-22T12:18:24Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7762395d-4348-422a-b76e-6e2cdf460767","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:18:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7762395d-4348-422a-b76e-6e2cdf460767\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5751 chars]
	I0122 12:32:58.340398    9928 request.go:629] Waited for 197.351ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:32:58.340789    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:32:58.340821    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:58.340821    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:58.340821    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:58.346086    9928 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:32:58.346086    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:58.346292    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:58 GMT
	I0122 12:32:58.346292    9928 round_trippers.go:580]     Audit-Id: 71ea3118-aeab-4291-bf79-fd62caf1a5f3
	I0122 12:32:58.346292    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:58.346292    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:58.346370    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:58.346370    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:58.346691    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"1d1fc222-9802-4017-8aee-b3310ac58592","resourceVersion":"1605","creationTimestamp":"2024-01-22T12:28:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_28_39_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:28:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3637 chars]
	I0122 12:32:58.347384    9928 pod_ready.go:92] pod "kube-proxy-mxd9x" in "kube-system" namespace has status "Ready":"True"
	I0122 12:32:58.347384    9928 pod_ready.go:81] duration metric: took 405.6106ms waiting for pod "kube-proxy-mxd9x" in "kube-system" namespace to be "Ready" ...
	I0122 12:32:58.347470    9928 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nsjc6" in "kube-system" namespace to be "Ready" ...
	I0122 12:32:58.530451    9928 request.go:629] Waited for 182.6505ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nsjc6
	I0122 12:32:58.530771    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nsjc6
	I0122 12:32:58.530845    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:58.530845    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:58.530926    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:58.534623    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:32:58.534623    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:58.535438    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:58.535438    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:58 GMT
	I0122 12:32:58.535438    9928 round_trippers.go:580]     Audit-Id: 3d799f44-036a-48ee-9840-fd496952751e
	I0122 12:32:58.535438    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:58.535438    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:58.535520    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:58.535839    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nsjc6","generateName":"kube-proxy-","namespace":"kube-system","uid":"6023ce45-661a-489d-aacf-21a85a2c32e4","resourceVersion":"612","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7762395d-4348-422a-b76e-6e2cdf460767","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7762395d-4348-422a-b76e-6e2cdf460767\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0122 12:32:58.736348    9928 request.go:629] Waited for 199.5591ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:32:58.736518    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:32:58.736518    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:58.736518    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:58.736518    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:58.741034    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:32:58.741034    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:58.742072    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:58.742102    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:58.742102    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:58.742102    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:58 GMT
	I0122 12:32:58.742102    9928 round_trippers.go:580]     Audit-Id: 97b09077-5884-4312-a170-c60143a7df89
	I0122 12:32:58.742102    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:58.742276    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"1577","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_28_39_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager": [truncated 3819 chars]
	I0122 12:32:58.743128    9928 pod_ready.go:92] pod "kube-proxy-nsjc6" in "kube-system" namespace has status "Ready":"True"
	I0122 12:32:58.743128    9928 pod_ready.go:81] duration metric: took 395.6566ms waiting for pod "kube-proxy-nsjc6" in "kube-system" namespace to be "Ready" ...
	I0122 12:32:58.743128    9928 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:32:58.940955    9928 request.go:629] Waited for 197.4011ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-484200
	I0122 12:32:58.941143    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-484200
	I0122 12:32:58.941143    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:58.941143    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:58.941143    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:58.948247    9928 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0122 12:32:58.948332    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:58.948332    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:58.948393    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:58.948442    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:58.948442    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:58 GMT
	I0122 12:32:58.948442    9928 round_trippers.go:580]     Audit-Id: bf6964d6-0e70-4a94-9309-85088a57d80c
	I0122 12:32:58.948442    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:58.948442    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-484200","namespace":"kube-system","uid":"d2f95119-a5d5-4331-880e-bf6add1d87b6","resourceVersion":"1713","creationTimestamp":"2024-01-22T12:10:47Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"59ca1fdff10e7440743e9ed3e4fed55c","kubernetes.io/config.mirror":"59ca1fdff10e7440743e9ed3e4fed55c","kubernetes.io/config.seen":"2024-01-22T12:10:46.856993074Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5153 chars]
	I0122 12:32:59.130879    9928 request.go:629] Waited for 181.0422ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:32:59.131379    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:32:59.131379    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:59.131379    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:59.131498    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:59.136635    9928 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:32:59.136635    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:59.136777    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:59.136777    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:59.136777    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:59.136777    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:59.136777    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:59 GMT
	I0122 12:32:59.136834    9928 round_trippers.go:580]     Audit-Id: 5b39e856-5221-45ae-b22c-becbfed55ee4
	I0122 12:32:59.137067    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1662","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0122 12:32:59.137187    9928 pod_ready.go:97] node "multinode-484200" hosting pod "kube-scheduler-multinode-484200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484200" has status "Ready":"False"
	I0122 12:32:59.137187    9928 pod_ready.go:81] duration metric: took 394.0571ms waiting for pod "kube-scheduler-multinode-484200" in "kube-system" namespace to be "Ready" ...
	E0122 12:32:59.137187    9928 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-484200" hosting pod "kube-scheduler-multinode-484200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484200" has status "Ready":"False"
	I0122 12:32:59.137187    9928 pod_ready.go:38] duration metric: took 1.6460383s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0122 12:32:59.137711    9928 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0122 12:32:59.155663    9928 command_runner.go:130] > -16
	I0122 12:32:59.155865    9928 ops.go:34] apiserver oom_adj: -16
	I0122 12:32:59.155865    9928 kubeadm.go:640] restartCluster took 14.8455325s
	I0122 12:32:59.155903    9928 kubeadm.go:406] StartCluster complete in 14.9119591s
	I0122 12:32:59.155903    9928 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 12:32:59.156247    9928 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 12:32:59.157626    9928 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 12:32:59.158468    9928 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0122 12:32:59.159111    9928 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0122 12:32:59.160075    9928 out.go:177] * Enabled addons: 
	I0122 12:32:59.159974    9928 config.go:182] Loaded profile config "multinode-484200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 12:32:59.160876    9928 addons.go:505] enable addons completed in 1.7649ms: enabled=[]
	I0122 12:32:59.171873    9928 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 12:32:59.172927    9928 kapi.go:59] client config for multinode-484200: &rest.Config{Host:"https://172.23.191.211:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x184a600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0122 12:32:59.174871    9928 cert_rotation.go:137] Starting client certificate rotation controller
	I0122 12:32:59.174871    9928 round_trippers.go:463] GET https://172.23.191.211:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0122 12:32:59.174871    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:59.174871    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:59.174871    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:59.188321    9928 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0122 12:32:59.188960    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:59.188960    9928 round_trippers.go:580]     Content-Length: 292
	I0122 12:32:59.188960    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:59 GMT
	I0122 12:32:59.188960    9928 round_trippers.go:580]     Audit-Id: 72d0b8e4-ee81-4038-b8cf-b109864038a5
	I0122 12:32:59.188960    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:59.188960    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:59.188960    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:59.189071    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:59.189092    9928 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"98df7e30-2026-4507-ad8d-589d82e9b1ba","resourceVersion":"1735","creationTimestamp":"2024-01-22T12:10:46Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0122 12:32:59.189212    9928 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-484200" context rescaled to 1 replicas
	I0122 12:32:59.189212    9928 start.go:223] Will wait 6m0s for node &{Name: IP:172.23.191.211 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0122 12:32:59.190316    9928 out.go:177] * Verifying Kubernetes components...
	I0122 12:32:59.203797    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 12:32:59.396292    9928 command_runner.go:130] > apiVersion: v1
	I0122 12:32:59.396379    9928 command_runner.go:130] > data:
	I0122 12:32:59.396379    9928 command_runner.go:130] >   Corefile: |
	I0122 12:32:59.396379    9928 command_runner.go:130] >     .:53 {
	I0122 12:32:59.396379    9928 command_runner.go:130] >         log
	I0122 12:32:59.396379    9928 command_runner.go:130] >         errors
	I0122 12:32:59.396379    9928 command_runner.go:130] >         health {
	I0122 12:32:59.396379    9928 command_runner.go:130] >            lameduck 5s
	I0122 12:32:59.396458    9928 command_runner.go:130] >         }
	I0122 12:32:59.396458    9928 command_runner.go:130] >         ready
	I0122 12:32:59.396490    9928 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0122 12:32:59.396490    9928 command_runner.go:130] >            pods insecure
	I0122 12:32:59.396490    9928 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0122 12:32:59.396521    9928 command_runner.go:130] >            ttl 30
	I0122 12:32:59.396521    9928 command_runner.go:130] >         }
	I0122 12:32:59.396521    9928 command_runner.go:130] >         prometheus :9153
	I0122 12:32:59.396589    9928 command_runner.go:130] >         hosts {
	I0122 12:32:59.396652    9928 command_runner.go:130] >            172.23.176.1 host.minikube.internal
	I0122 12:32:59.396676    9928 command_runner.go:130] >            fallthrough
	I0122 12:32:59.396676    9928 command_runner.go:130] >         }
	I0122 12:32:59.396676    9928 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0122 12:32:59.396705    9928 command_runner.go:130] >            max_concurrent 1000
	I0122 12:32:59.396705    9928 command_runner.go:130] >         }
	I0122 12:32:59.396705    9928 command_runner.go:130] >         cache 30
	I0122 12:32:59.396705    9928 command_runner.go:130] >         loop
	I0122 12:32:59.396705    9928 command_runner.go:130] >         reload
	I0122 12:32:59.396705    9928 command_runner.go:130] >         loadbalance
	I0122 12:32:59.396705    9928 command_runner.go:130] >     }
	I0122 12:32:59.396705    9928 command_runner.go:130] > kind: ConfigMap
	I0122 12:32:59.396705    9928 command_runner.go:130] > metadata:
	I0122 12:32:59.396705    9928 command_runner.go:130] >   creationTimestamp: "2024-01-22T12:10:46Z"
	I0122 12:32:59.396705    9928 command_runner.go:130] >   name: coredns
	I0122 12:32:59.396705    9928 command_runner.go:130] >   namespace: kube-system
	I0122 12:32:59.396705    9928 command_runner.go:130] >   resourceVersion: "398"
	I0122 12:32:59.396705    9928 command_runner.go:130] >   uid: b9768c33-b82e-4fdb-8e5b-45028bdc3da3
	I0122 12:32:59.396705    9928 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0122 12:32:59.396705    9928 node_ready.go:35] waiting up to 6m0s for node "multinode-484200" to be "Ready" ...
	I0122 12:32:59.397231    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:32:59.397270    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:59.397270    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:59.397270    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:59.401848    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:32:59.401848    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:59.401848    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:59.401914    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:59.401963    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:59.402098    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:59 GMT
	I0122 12:32:59.402098    9928 round_trippers.go:580]     Audit-Id: 99565fdb-b028-44db-b606-07cdb721f1db
	I0122 12:32:59.402098    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:59.402279    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1662","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0122 12:32:59.907644    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:32:59.907738    9928 round_trippers.go:469] Request Headers:
	I0122 12:32:59.907826    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:32:59.907826    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:32:59.912398    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:32:59.912398    9928 round_trippers.go:577] Response Headers:
	I0122 12:32:59.912398    9928 round_trippers.go:580]     Audit-Id: 803f8815-43ca-4487-b4ed-8a415a28bdad
	I0122 12:32:59.912398    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:32:59.912398    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:32:59.912732    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:32:59.912732    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:32:59.912732    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:32:59 GMT
	I0122 12:32:59.913360    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1662","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0122 12:33:00.412401    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:00.412491    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:00.412491    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:00.412491    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:00.416866    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:33:00.416866    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:00.416866    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:00.416866    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:00.416866    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:00.416866    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:00 GMT
	I0122 12:33:00.416866    9928 round_trippers.go:580]     Audit-Id: 1e3900df-764d-4eb8-887f-b7bfdf742c8c
	I0122 12:33:00.416866    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:00.418034    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1662","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0122 12:33:00.902871    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:00.902937    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:00.902937    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:00.902937    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:00.906712    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:00.907394    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:00.907394    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:00.907394    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:00 GMT
	I0122 12:33:00.907394    9928 round_trippers.go:580]     Audit-Id: fa2e302d-8084-4e71-a8b6-531798fc7a72
	I0122 12:33:00.907463    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:00.907463    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:00.907490    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:00.907634    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1662","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0122 12:33:01.405710    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:01.405806    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:01.405918    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:01.405918    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:01.409954    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:33:01.409954    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:01.410485    9928 round_trippers.go:580]     Audit-Id: 4333e119-5e20-4db3-ae9e-50fe1c3d26d1
	I0122 12:33:01.410485    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:01.410485    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:01.410485    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:01.410485    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:01.410485    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:01 GMT
	I0122 12:33:01.410804    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1662","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0122 12:33:01.411393    9928 node_ready.go:58] node "multinode-484200" has status "Ready":"False"
	I0122 12:33:01.913144    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:01.913244    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:01.913244    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:01.913244    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:01.917756    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:33:01.917756    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:01.918416    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:01.918416    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:01.918416    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:01.918416    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:01.918483    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:01 GMT
	I0122 12:33:01.918483    9928 round_trippers.go:580]     Audit-Id: c92cd6bd-5659-48f4-9fdf-5c5d2569ace2
	I0122 12:33:01.918980    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1662","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0122 12:33:02.399554    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:02.399640    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:02.399640    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:02.399640    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:02.403032    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:02.403032    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:02.403032    9928 round_trippers.go:580]     Audit-Id: 22912381-d38e-4427-92ae-aa251613612a
	I0122 12:33:02.403032    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:02.403032    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:02.403427    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:02.403427    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:02.403427    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:02 GMT
	I0122 12:33:02.403714    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1662","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0122 12:33:02.903012    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:02.903086    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:02.903086    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:02.903086    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:02.906921    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:02.906921    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:02.907639    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:02.907639    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:02.907639    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:02.907639    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:02.907639    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:02 GMT
	I0122 12:33:02.907639    9928 round_trippers.go:580]     Audit-Id: 9bcd172d-05a2-4908-a57b-5b0d1ca09750
	I0122 12:33:02.907639    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:02.908431    9928 node_ready.go:49] node "multinode-484200" has status "Ready":"True"
	I0122 12:33:02.908431    9928 node_ready.go:38] duration metric: took 3.51171s waiting for node "multinode-484200" to be "Ready" ...
	I0122 12:33:02.908514    9928 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0122 12:33:02.908612    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods
	I0122 12:33:02.908714    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:02.908714    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:02.908714    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:02.913805    9928 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:33:02.913805    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:02.913805    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:02.913805    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:02.914046    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:02.914046    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:02.914046    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:02 GMT
	I0122 12:33:02.914046    9928 round_trippers.go:580]     Audit-Id: 73aca338-8ba7-4bf3-bd8a-f5e6d4d96c87
	I0122 12:33:02.917329    9928 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1751"},"items":[{"metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83589 chars]
	I0122 12:33:02.921475    9928 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-g9wk9" in "kube-system" namespace to be "Ready" ...
	I0122 12:33:02.921703    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:02.921727    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:02.921727    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:02.921793    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:02.925606    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:02.926198    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:02.926279    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:02.926279    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:02.926279    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:02.926355    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:02 GMT
	I0122 12:33:02.926383    9928 round_trippers.go:580]     Audit-Id: 97d58a5d-023f-4689-9e2b-606ddef97214
	I0122 12:33:02.926383    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:02.926622    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:33:02.927242    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:02.927303    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:02.927303    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:02.927373    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:02.930508    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:02.930604    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:02.930604    9928 round_trippers.go:580]     Audit-Id: 6d3336ab-84fb-40f2-a4ed-70d80d715733
	I0122 12:33:02.930604    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:02.930670    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:02.930670    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:02.930670    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:02.930670    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:02 GMT
	I0122 12:33:02.930670    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:03.422552    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:03.422552    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:03.422552    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:03.422552    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:03.428157    9928 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:33:03.428157    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:03.428157    9928 round_trippers.go:580]     Audit-Id: 45b79bb8-a77d-4642-a82f-33059d5c0f74
	I0122 12:33:03.428157    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:03.428157    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:03.428157    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:03.428407    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:03.428407    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:03 GMT
	I0122 12:33:03.428605    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:33:03.429642    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:03.429642    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:03.429642    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:03.429642    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:03.436167    9928 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0122 12:33:03.436167    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:03.436167    9928 round_trippers.go:580]     Audit-Id: 7c33f8b8-7023-4e20-a6d5-9ae300e9c751
	I0122 12:33:03.436167    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:03.436167    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:03.436167    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:03.436167    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:03.436167    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:03 GMT
	I0122 12:33:03.436167    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:03.922827    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:03.922903    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:03.922903    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:03.922903    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:03.927782    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:33:03.928208    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:03.928208    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:03.928208    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:03.928208    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:03.928208    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:03 GMT
	I0122 12:33:03.928422    9928 round_trippers.go:580]     Audit-Id: c1ee8e10-c16b-41f0-8e3d-8055f2eb2cbd
	I0122 12:33:03.928465    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:03.928702    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:33:03.929590    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:03.929590    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:03.929663    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:03.929663    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:03.933056    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:03.933056    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:03.933056    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:03.933056    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:03.933056    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:03 GMT
	I0122 12:33:03.933056    9928 round_trippers.go:580]     Audit-Id: 05e713e3-3b1e-4158-9ae4-ed149a1bab9d
	I0122 12:33:03.933452    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:03.933452    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:03.933741    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:04.437465    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:04.437544    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:04.437544    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:04.437606    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:04.441313    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:04.442158    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:04.442158    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:04.442158    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:04 GMT
	I0122 12:33:04.442158    9928 round_trippers.go:580]     Audit-Id: 06fb88c6-9c80-4de6-bdbb-86a825f637b6
	I0122 12:33:04.442158    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:04.442158    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:04.442158    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:04.442509    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:33:04.443241    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:04.443241    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:04.443241    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:04.443241    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:04.445603    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:33:04.445603    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:04.446549    9928 round_trippers.go:580]     Audit-Id: 395cc31b-be74-4f69-9162-920ad6543ee4
	I0122 12:33:04.446549    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:04.446549    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:04.446549    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:04.446549    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:04.446611    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:04 GMT
	I0122 12:33:04.446611    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:04.936235    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:04.936235    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:04.936235    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:04.936235    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:04.939713    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:04.939713    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:04.940712    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:04.940712    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:04.940748    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:04.940748    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:04.940748    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:04 GMT
	I0122 12:33:04.940748    9928 round_trippers.go:580]     Audit-Id: f26c1e48-d816-490a-a692-cfeb3e2f8059
	I0122 12:33:04.940852    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:33:04.941095    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:04.941687    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:04.941687    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:04.941687    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:04.944798    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:04.944798    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:04.944798    9928 round_trippers.go:580]     Audit-Id: 209edfa7-b032-4b39-b098-4e3077658a9e
	I0122 12:33:04.944798    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:04.944798    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:04.944798    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:04.944798    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:04.944798    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:04 GMT
	I0122 12:33:04.944798    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:04.945762    9928 pod_ready.go:102] pod "coredns-5dd5756b68-g9wk9" in "kube-system" namespace has status "Ready":"False"
	I0122 12:33:05.424190    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:05.424190    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:05.424190    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:05.424190    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:05.428792    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:33:05.429188    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:05.429188    9928 round_trippers.go:580]     Audit-Id: 8716a6b4-f07a-49cc-8202-99a9ed9e7319
	I0122 12:33:05.429188    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:05.429188    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:05.429188    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:05.429188    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:05.429188    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:05 GMT
	I0122 12:33:05.429188    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:33:05.430008    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:05.430008    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:05.430008    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:05.430008    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:05.433679    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:05.433679    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:05.433679    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:05.433679    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:05.433679    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:05 GMT
	I0122 12:33:05.433679    9928 round_trippers.go:580]     Audit-Id: 63a86ddc-b654-4126-bf88-38ad91076e1d
	I0122 12:33:05.434276    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:05.434276    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:05.434589    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:05.926131    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:05.926222    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:05.926222    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:05.926315    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:05.928914    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:33:05.928914    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:05.929338    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:05.929338    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:05.929338    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:05.929338    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:05.929338    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:05 GMT
	I0122 12:33:05.929338    9928 round_trippers.go:580]     Audit-Id: 2d58ba30-fc82-407f-92c8-5b0509ac5720
	I0122 12:33:05.929400    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:33:05.930272    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:05.930350    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:05.930350    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:05.930350    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:05.936114    9928 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:33:05.936114    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:05.936594    9928 round_trippers.go:580]     Audit-Id: 7da4d310-fb3c-4a51-9239-56df5133e0d9
	I0122 12:33:05.936594    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:05.936594    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:05.936594    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:05.936594    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:05.936665    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:05 GMT
	I0122 12:33:05.936925    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:06.428134    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:06.428244    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:06.428315    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:06.428315    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:06.444548    9928 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0122 12:33:06.444614    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:06.444614    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:06.444614    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:06 GMT
	I0122 12:33:06.444614    9928 round_trippers.go:580]     Audit-Id: 5618bbfa-589b-4829-84d4-9bcc30e1e1e5
	I0122 12:33:06.444614    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:06.444684    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:06.444684    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:06.444917    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:33:06.445710    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:06.445791    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:06.445791    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:06.445791    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:06.450322    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:33:06.450322    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:06.450322    9928 round_trippers.go:580]     Audit-Id: f8cd9dfa-6812-44c7-9431-6736f208d64c
	I0122 12:33:06.450322    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:06.450322    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:06.450322    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:06.450322    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:06.450322    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:06 GMT
	I0122 12:33:06.450322    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:06.930459    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:06.930459    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:06.930459    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:06.930459    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:06.935069    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:33:06.935286    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:06.935286    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:06.935286    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:06.935286    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:06.935371    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:06 GMT
	I0122 12:33:06.935371    9928 round_trippers.go:580]     Audit-Id: 6612e434-549a-4cfe-ae2e-045143970d51
	I0122 12:33:06.935437    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:06.935647    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:33:06.936340    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:06.936340    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:06.936510    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:06.936510    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:06.939717    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:06.939717    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:06.939717    9928 round_trippers.go:580]     Audit-Id: 96e096ff-4619-46e7-af7d-afeddfe8e86f
	I0122 12:33:06.939717    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:06.940599    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:06.940599    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:06.940599    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:06.940599    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:06 GMT
	I0122 12:33:06.940937    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:07.433548    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:07.433604    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:07.433604    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:07.433604    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:07.437039    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:07.437039    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:07.437039    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:07.437593    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:07.437593    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:07.437593    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:07.437593    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:07 GMT
	I0122 12:33:07.437593    9928 round_trippers.go:580]     Audit-Id: f012c16f-4c03-444a-8eb3-24a551749c50
	I0122 12:33:07.437922    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:33:07.438661    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:07.438661    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:07.438661    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:07.438661    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:07.442000    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:07.442000    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:07.442000    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:07 GMT
	I0122 12:33:07.442000    9928 round_trippers.go:580]     Audit-Id: d83e9538-b67a-42f1-9093-d6416eeed3df
	I0122 12:33:07.442536    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:07.442536    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:07.442536    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:07.442536    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:07.443187    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:07.443927    9928 pod_ready.go:102] pod "coredns-5dd5756b68-g9wk9" in "kube-system" namespace has status "Ready":"False"
	I0122 12:33:07.923519    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:07.923519    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:07.923519    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:07.923519    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:07.928166    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:07.928209    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:07.928209    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:07 GMT
	I0122 12:33:07.928209    9928 round_trippers.go:580]     Audit-Id: f5c1ffc6-b693-43d8-b72a-1e6bc6abe580
	I0122 12:33:07.928209    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:07.928209    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:07.928209    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:07.928209    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:07.928209    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:33:07.929094    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:07.929094    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:07.929094    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:07.929094    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:07.932679    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:07.932679    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:07.932679    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:07 GMT
	I0122 12:33:07.933095    9928 round_trippers.go:580]     Audit-Id: 7e75b55b-c186-4e0a-9fea-7cd42911fce1
	I0122 12:33:07.933095    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:07.933095    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:07.933095    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:07.933095    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:07.933410    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:08.435102    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:08.435177    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:08.435177    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:08.435249    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:08.439172    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:08.440155    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:08.440155    9928 round_trippers.go:580]     Audit-Id: cb24955d-1368-47aa-a4f2-0dee54185a98
	I0122 12:33:08.440206    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:08.440206    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:08.440206    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:08.440206    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:08.440206    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:08 GMT
	I0122 12:33:08.440738    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:33:08.441205    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:08.441205    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:08.441205    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:08.441847    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:08.448790    9928 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0122 12:33:08.448790    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:08.448790    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:08.448790    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:08.448790    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:08 GMT
	I0122 12:33:08.448790    9928 round_trippers.go:580]     Audit-Id: 30c15e5a-bbff-4af8-9a83-b50a5dd2bf66
	I0122 12:33:08.448790    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:08.448790    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:08.449795    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:08.931904    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:08.931904    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:08.931904    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:08.931904    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:08.936629    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:33:08.936629    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:08.936629    9928 round_trippers.go:580]     Audit-Id: ca585ee3-47a8-4660-9531-84faa4f46e85
	I0122 12:33:08.936629    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:08.936629    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:08.936629    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:08.936629    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:08.937278    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:08 GMT
	I0122 12:33:08.937733    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:33:08.939077    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:08.939137    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:08.939137    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:08.939137    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:08.949182    9928 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0122 12:33:08.949182    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:08.949182    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:08.949182    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:08.949182    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:08 GMT
	I0122 12:33:08.949182    9928 round_trippers.go:580]     Audit-Id: f10a2950-0baf-4b87-b325-1ac8617f7a20
	I0122 12:33:08.949182    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:08.949182    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:08.949182    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:09.435249    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:09.435249    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:09.435249    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:09.435249    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:09.439242    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:09.439242    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:09.439242    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:09.439242    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:09.439853    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:09.439853    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:09 GMT
	I0122 12:33:09.439853    9928 round_trippers.go:580]     Audit-Id: 3618903b-4992-42fc-b820-03ceebeebcd9
	I0122 12:33:09.439853    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:09.440059    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:33:09.440924    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:09.441012    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:09.441012    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:09.441012    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:09.444551    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:09.445013    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:09.445013    9928 round_trippers.go:580]     Audit-Id: 4f323315-a07a-4e05-8ae7-ff8ff8967ccb
	I0122 12:33:09.445013    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:09.445013    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:09.445013    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:09.445013    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:09.445130    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:09 GMT
	I0122 12:33:09.445444    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:09.446038    9928 pod_ready.go:102] pod "coredns-5dd5756b68-g9wk9" in "kube-system" namespace has status "Ready":"False"
	I0122 12:33:09.933255    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:09.933322    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:09.933322    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:09.933322    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:09.937528    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:33:09.937957    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:09.937957    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:09.937957    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:09.937957    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:09 GMT
	I0122 12:33:09.937957    9928 round_trippers.go:580]     Audit-Id: 2753e85b-fc8f-44b9-ac4f-62491dc51aa4
	I0122 12:33:09.937957    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:09.937957    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:09.938371    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:33:09.939182    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:09.939182    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:09.939255    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:09.939255    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:09.942533    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:09.942772    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:09.942772    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:09.942772    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:09.942772    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:09.942772    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:09 GMT
	I0122 12:33:09.942772    9928 round_trippers.go:580]     Audit-Id: 499bfef9-f01b-456e-b62f-d0ab9ccb72bc
	I0122 12:33:09.942772    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:09.943384    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:10.426064    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:10.426064    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:10.426064    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:10.426064    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:10.429651    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:10.430079    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:10.430079    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:10.430079    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:10.430079    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:10.430079    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:10.430192    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:10 GMT
	I0122 12:33:10.430192    9928 round_trippers.go:580]     Audit-Id: 27f4d291-72f5-430d-9115-a84c04f855da
	I0122 12:33:10.430350    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:33:10.430865    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:10.430865    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:10.430865    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:10.430865    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:10.435575    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:33:10.435575    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:10.436110    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:10 GMT
	I0122 12:33:10.436110    9928 round_trippers.go:580]     Audit-Id: 33a1915b-3b87-493c-b077-082fa2493fc9
	I0122 12:33:10.436110    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:10.436110    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:10.436182    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:10.436182    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:10.436215    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:10.932525    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:10.932525    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:10.932525    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:10.932525    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:10.935518    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:33:10.936508    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:10.936508    9928 round_trippers.go:580]     Audit-Id: 339235a4-5abc-430c-8134-a185abc5d40d
	I0122 12:33:10.936576    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:10.936576    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:10.936576    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:10.936576    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:10.936576    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:10 GMT
	I0122 12:33:10.936825    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:33:10.937491    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:10.937491    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:10.937491    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:10.937491    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:10.943517    9928 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0122 12:33:10.943517    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:10.944450    9928 round_trippers.go:580]     Audit-Id: d0c0a248-8410-49cc-b80a-983071f61c89
	I0122 12:33:10.944450    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:10.944450    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:10.944450    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:10.944450    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:10.944450    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:10 GMT
	I0122 12:33:10.944450    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:11.427349    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:11.427349    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:11.427349    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:11.427349    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:11.431903    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:33:11.432426    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:11.432426    9928 round_trippers.go:580]     Audit-Id: 37b4c276-7baa-4f9c-b51a-4d19228316ad
	I0122 12:33:11.432426    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:11.432426    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:11.432426    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:11.432487    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:11.432487    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:11 GMT
	I0122 12:33:11.432524    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:33:11.433690    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:11.433760    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:11.433760    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:11.433760    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:11.436571    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:33:11.436571    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:11.437133    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:11 GMT
	I0122 12:33:11.437133    9928 round_trippers.go:580]     Audit-Id: 963f3c30-a697-4f0a-a788-d89961ddf189
	I0122 12:33:11.437133    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:11.437133    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:11.437133    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:11.437133    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:11.437597    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:11.927691    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:11.927781    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:11.927781    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:11.927781    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:11.931625    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:11.931971    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:11.931971    9928 round_trippers.go:580]     Audit-Id: 7e06334b-bf75-480c-a7e3-700922a3de89
	I0122 12:33:11.931971    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:11.931971    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:11.931971    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:11.931971    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:11.931971    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:11 GMT
	I0122 12:33:11.931971    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1707","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0122 12:33:11.932822    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:11.932822    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:11.932822    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:11.932822    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:11.935429    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:33:11.935429    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:11.935429    9928 round_trippers.go:580]     Audit-Id: ae77a19f-64f7-4b02-a75b-0dc8d32183f1
	I0122 12:33:11.935429    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:11.935429    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:11.935429    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:11.935429    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:11.936362    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:11 GMT
	I0122 12:33:11.936436    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:11.937037    9928 pod_ready.go:102] pod "coredns-5dd5756b68-g9wk9" in "kube-system" namespace has status "Ready":"False"
	I0122 12:33:12.430569    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:33:12.430569    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:12.430681    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:12.430681    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:12.439647    9928 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0122 12:33:12.439647    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:12.439647    9928 round_trippers.go:580]     Audit-Id: 006c810b-2858-42bd-adad-ac744bd38c72
	I0122 12:33:12.439647    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:12.439647    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:12.439647    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:12.439647    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:12.439647    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:12 GMT
	I0122 12:33:12.441133    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1783","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6494 chars]
	I0122 12:33:12.442114    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:12.442184    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:12.442184    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:12.442184    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:12.444952    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:33:12.445458    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:12.445458    9928 round_trippers.go:580]     Audit-Id: dabb8815-eb18-4304-9591-dc0d5b201a98
	I0122 12:33:12.445458    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:12.445458    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:12.445458    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:12.445458    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:12.445458    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:12 GMT
	I0122 12:33:12.446599    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:12.446801    9928 pod_ready.go:92] pod "coredns-5dd5756b68-g9wk9" in "kube-system" namespace has status "Ready":"True"
	I0122 12:33:12.446801    9928 pod_ready.go:81] duration metric: took 9.525257s waiting for pod "coredns-5dd5756b68-g9wk9" in "kube-system" namespace to be "Ready" ...
	I0122 12:33:12.446801    9928 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:33:12.446801    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-484200
	I0122 12:33:12.446801    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:12.446801    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:12.446801    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:12.449823    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:12.449823    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:12.449823    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:12.449823    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:12.449823    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:12.449823    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:12.449823    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:12 GMT
	I0122 12:33:12.450826    9928 round_trippers.go:580]     Audit-Id: 1c5720d3-9f85-4b64-8205-ffa2c95d65b9
	I0122 12:33:12.450826    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-484200","namespace":"kube-system","uid":"dc38500e-1aab-483b-bc82-a1071a7b7796","resourceVersion":"1771","creationTimestamp":"2024-01-22T12:32:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.191.211:2379","kubernetes.io/config.hash":"487d659dc59bb7ccbc4aa1b97ecfd4c1","kubernetes.io/config.mirror":"487d659dc59bb7ccbc4aa1b97ecfd4c1","kubernetes.io/config.seen":"2024-01-22T12:32:47.015233808Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:32:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 5873 chars]
	I0122 12:33:12.450826    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:12.450826    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:12.450826    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:12.450826    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:12.454517    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:12.454517    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:12.454517    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:12.454517    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:12.455103    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:12.455103    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:12 GMT
	I0122 12:33:12.455103    9928 round_trippers.go:580]     Audit-Id: 4ea7dfb1-ba57-4f2a-bdfe-7c552e42f401
	I0122 12:33:12.455103    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:12.455407    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:12.455838    9928 pod_ready.go:92] pod "etcd-multinode-484200" in "kube-system" namespace has status "Ready":"True"
	I0122 12:33:12.455914    9928 pod_ready.go:81] duration metric: took 9.1137ms waiting for pod "etcd-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:33:12.455975    9928 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:33:12.456058    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-484200
	I0122 12:33:12.456126    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:12.456126    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:12.456126    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:12.458906    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:33:12.458906    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:12.459089    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:12.459089    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:12.459089    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:12.459089    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:12.459089    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:12 GMT
	I0122 12:33:12.459089    9928 round_trippers.go:580]     Audit-Id: 235b0e25-e033-46bf-bfbc-47d2f9792aa3
	I0122 12:33:12.459089    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-484200","namespace":"kube-system","uid":"1a34c38d-813d-4a41-ab10-ee15c4a85906","resourceVersion":"1757","creationTimestamp":"2024-01-22T12:32:54Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.191.211:8443","kubernetes.io/config.hash":"58aa3387db541c8286ea4c05e28153aa","kubernetes.io/config.mirror":"58aa3387db541c8286ea4c05e28153aa","kubernetes.io/config.seen":"2024-01-22T12:32:47.015239308Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:32:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7409 chars]
	I0122 12:33:12.459790    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:12.459859    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:12.459859    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:12.459859    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:12.463120    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:12.463120    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:12.463120    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:12.463237    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:12.463237    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:12.463237    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:12.463237    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:12 GMT
	I0122 12:33:12.463237    9928 round_trippers.go:580]     Audit-Id: c4f67a00-72a3-4c87-82fe-25988974104d
	I0122 12:33:12.463517    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:12.463517    9928 pod_ready.go:92] pod "kube-apiserver-multinode-484200" in "kube-system" namespace has status "Ready":"True"
	I0122 12:33:12.463517    9928 pod_ready.go:81] duration metric: took 7.5422ms waiting for pod "kube-apiserver-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:33:12.463517    9928 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:33:12.463517    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-484200
	I0122 12:33:12.463517    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:12.463517    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:12.463517    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:12.467071    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:12.468070    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:12.468115    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:12.468115    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:12 GMT
	I0122 12:33:12.468115    9928 round_trippers.go:580]     Audit-Id: b80a472d-0bda-4a9d-bd4b-dfcf8f417a11
	I0122 12:33:12.468115    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:12.468115    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:12.468115    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:12.468257    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-484200","namespace":"kube-system","uid":"2f20b1d1-9ba0-417b-9060-a0e50a1522ad","resourceVersion":"1769","creationTimestamp":"2024-01-22T12:10:47Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"09e3d582188e7c296d7c87aae161a2a5","kubernetes.io/config.mirror":"09e3d582188e7c296d7c87aae161a2a5","kubernetes.io/config.seen":"2024-01-22T12:10:46.856991974Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7179 chars]
	I0122 12:33:12.469013    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:12.469013    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:12.469013    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:12.469013    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:12.471597    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:33:12.471597    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:12.471597    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:12.471597    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:12.471597    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:12.471597    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:12 GMT
	I0122 12:33:12.472121    9928 round_trippers.go:580]     Audit-Id: f50825e3-b788-4fe3-9fc5-b6983dec9710
	I0122 12:33:12.472121    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:12.472440    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:12.472440    9928 pod_ready.go:92] pod "kube-controller-manager-multinode-484200" in "kube-system" namespace has status "Ready":"True"
	I0122 12:33:12.472440    9928 pod_ready.go:81] duration metric: took 8.9234ms waiting for pod "kube-controller-manager-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:33:12.472440    9928 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m94rg" in "kube-system" namespace to be "Ready" ...
	I0122 12:33:12.473037    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m94rg
	I0122 12:33:12.473037    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:12.473134    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:12.473134    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:12.476107    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:33:12.476647    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:12.476647    9928 round_trippers.go:580]     Audit-Id: ab50380e-f3a7-49b9-8b10-69d303221ed9
	I0122 12:33:12.476647    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:12.476647    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:12.476647    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:12.476647    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:12.476647    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:12 GMT
	I0122 12:33:12.477094    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-m94rg","generateName":"kube-proxy-","namespace":"kube-system","uid":"603ddbfd-0357-4e57-a38a-5054e02f81ee","resourceVersion":"1740","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7762395d-4348-422a-b76e-6e2cdf460767","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7762395d-4348-422a-b76e-6e2cdf460767\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5743 chars]
	I0122 12:33:12.477356    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:12.477356    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:12.477356    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:12.477356    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:12.480924    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:12.480924    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:12.481122    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:12.481122    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:12.481122    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:12.481122    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:12 GMT
	I0122 12:33:12.481122    9928 round_trippers.go:580]     Audit-Id: 7fcb4197-6822-4628-a160-369ce857c289
	I0122 12:33:12.481122    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:12.481476    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:12.481592    9928 pod_ready.go:92] pod "kube-proxy-m94rg" in "kube-system" namespace has status "Ready":"True"
	I0122 12:33:12.481592    9928 pod_ready.go:81] duration metric: took 9.1513ms waiting for pod "kube-proxy-m94rg" in "kube-system" namespace to be "Ready" ...
	I0122 12:33:12.481592    9928 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mxd9x" in "kube-system" namespace to be "Ready" ...
	I0122 12:33:12.633924    9928 request.go:629] Waited for 152.3314ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxd9x
	I0122 12:33:12.634212    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxd9x
	I0122 12:33:12.634212    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:12.634212    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:12.634212    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:12.638034    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:12.638034    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:12.638034    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:12.638034    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:12.638034    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:12.638034    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:12.638034    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:12 GMT
	I0122 12:33:12.638034    9928 round_trippers.go:580]     Audit-Id: 37ef7d68-a9ab-4523-b76a-bbb440bf1c88
	I0122 12:33:12.638034    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mxd9x","generateName":"kube-proxy-","namespace":"kube-system","uid":"f90d5b6e-4f0e-466f-8cc8-bc71ebdbad3c","resourceVersion":"1585","creationTimestamp":"2024-01-22T12:18:24Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7762395d-4348-422a-b76e-6e2cdf460767","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:18:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7762395d-4348-422a-b76e-6e2cdf460767\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5751 chars]
	I0122 12:33:12.836071    9928 request.go:629] Waited for 196.8401ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:33:12.836331    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:33:12.836331    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:12.836331    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:12.836331    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:12.839667    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:12.839667    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:12.839667    9928 round_trippers.go:580]     Audit-Id: 24ac447d-c55a-4ef4-9a3b-1cd2910bf171
	I0122 12:33:12.840655    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:12.840655    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:12.840655    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:12.840655    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:12.840655    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:12 GMT
	I0122 12:33:12.841089    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"1d1fc222-9802-4017-8aee-b3310ac58592","resourceVersion":"1605","creationTimestamp":"2024-01-22T12:28:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_28_39_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:28:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3637 chars]
	I0122 12:33:12.841160    9928 pod_ready.go:92] pod "kube-proxy-mxd9x" in "kube-system" namespace has status "Ready":"True"
	I0122 12:33:12.841160    9928 pod_ready.go:81] duration metric: took 359.5666ms waiting for pod "kube-proxy-mxd9x" in "kube-system" namespace to be "Ready" ...
	I0122 12:33:12.841160    9928 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nsjc6" in "kube-system" namespace to be "Ready" ...
	I0122 12:33:13.042134    9928 request.go:629] Waited for 200.8355ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nsjc6
	I0122 12:33:13.042229    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nsjc6
	I0122 12:33:13.042421    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:13.042421    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:13.042489    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:13.045795    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:13.045795    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:13.045795    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:13.045795    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:13.045795    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:13 GMT
	I0122 12:33:13.045795    9928 round_trippers.go:580]     Audit-Id: 891003cf-8ca1-4e39-b452-d46282ed326b
	I0122 12:33:13.045795    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:13.046653    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:13.047105    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nsjc6","generateName":"kube-proxy-","namespace":"kube-system","uid":"6023ce45-661a-489d-aacf-21a85a2c32e4","resourceVersion":"612","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7762395d-4348-422a-b76e-6e2cdf460767","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7762395d-4348-422a-b76e-6e2cdf460767\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0122 12:33:13.244317    9928 request.go:629] Waited for 196.3278ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:33:13.244317    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:33:13.244317    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:13.244683    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:13.244683    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:13.251403    9928 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0122 12:33:13.251403    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:13.251403    9928 round_trippers.go:580]     Audit-Id: 95379aaf-e690-47af-b6c6-9358b36c9a34
	I0122 12:33:13.251403    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:13.251403    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:13.251403    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:13.251403    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:13.251403    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:13 GMT
	I0122 12:33:13.252617    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169","resourceVersion":"1577","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_28_39_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager": [truncated 3819 chars]
	I0122 12:33:13.253234    9928 pod_ready.go:92] pod "kube-proxy-nsjc6" in "kube-system" namespace has status "Ready":"True"
	I0122 12:33:13.253283    9928 pod_ready.go:81] duration metric: took 412.1211ms waiting for pod "kube-proxy-nsjc6" in "kube-system" namespace to be "Ready" ...
	I0122 12:33:13.253349    9928 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:33:13.446200    9928 request.go:629] Waited for 192.7762ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-484200
	I0122 12:33:13.446491    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-484200
	I0122 12:33:13.446491    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:13.446491    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:13.446491    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:13.450046    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:13.450046    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:13.450046    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:13 GMT
	I0122 12:33:13.450905    9928 round_trippers.go:580]     Audit-Id: cbb8428e-a402-4184-9532-db1b6708f26e
	I0122 12:33:13.450905    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:13.450905    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:13.450905    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:13.450905    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:13.451045    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-484200","namespace":"kube-system","uid":"d2f95119-a5d5-4331-880e-bf6add1d87b6","resourceVersion":"1753","creationTimestamp":"2024-01-22T12:10:47Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"59ca1fdff10e7440743e9ed3e4fed55c","kubernetes.io/config.mirror":"59ca1fdff10e7440743e9ed3e4fed55c","kubernetes.io/config.seen":"2024-01-22T12:10:46.856993074Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4909 chars]
	I0122 12:33:13.633888    9928 request.go:629] Waited for 181.5719ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:13.633888    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:33:13.634227    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:13.634227    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:13.634274    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:13.637897    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:33:13.638892    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:13.638892    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:13.638892    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:13.638892    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:13.638892    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:13.638892    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:13 GMT
	I0122 12:33:13.638892    9928 round_trippers.go:580]     Audit-Id: 38aa4773-e33d-48d4-9d4e-fc4be0f88674
	I0122 12:33:13.638892    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:33:13.639570    9928 pod_ready.go:92] pod "kube-scheduler-multinode-484200" in "kube-system" namespace has status "Ready":"True"
	I0122 12:33:13.639570    9928 pod_ready.go:81] duration metric: took 386.2196ms waiting for pod "kube-scheduler-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:33:13.639570    9928 pod_ready.go:38] duration metric: took 10.7310092s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0122 12:33:13.639570    9928 api_server.go:52] waiting for apiserver process to appear ...
	I0122 12:33:13.653065    9928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 12:33:13.676124    9928 command_runner.go:130] > 1830
	I0122 12:33:13.676219    9928 api_server.go:72] duration metric: took 14.486944s to wait for apiserver process to appear ...
	I0122 12:33:13.676219    9928 api_server.go:88] waiting for apiserver healthz status ...
	I0122 12:33:13.676297    9928 api_server.go:253] Checking apiserver healthz at https://172.23.191.211:8443/healthz ...
	I0122 12:33:13.685278    9928 api_server.go:279] https://172.23.191.211:8443/healthz returned 200:
	ok
	I0122 12:33:13.686406    9928 round_trippers.go:463] GET https://172.23.191.211:8443/version
	I0122 12:33:13.686406    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:13.686406    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:13.686406    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:13.688768    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:33:13.688768    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:13.688768    9928 round_trippers.go:580]     Content-Length: 264
	I0122 12:33:13.688768    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:13 GMT
	I0122 12:33:13.688768    9928 round_trippers.go:580]     Audit-Id: ea666616-b12e-45de-8dbf-ad7b6fb05f9b
	I0122 12:33:13.688969    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:13.688969    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:13.688969    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:13.688969    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:13.688969    9928 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0122 12:33:13.688969    9928 api_server.go:141] control plane version: v1.28.4
	I0122 12:33:13.689097    9928 api_server.go:131] duration metric: took 12.8772ms to wait for apiserver health ...
	I0122 12:33:13.689097    9928 system_pods.go:43] waiting for kube-system pods to appear ...
	I0122 12:33:13.836456    9928 request.go:629] Waited for 147.2976ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods
	I0122 12:33:13.836836    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods
	I0122 12:33:13.836836    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:13.836886    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:13.836886    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:13.842526    9928 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:33:13.842526    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:13.842526    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:13.842526    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:13 GMT
	I0122 12:33:13.842526    9928 round_trippers.go:580]     Audit-Id: a3bf42f7-ceb6-4609-9809-9471de7fe797
	I0122 12:33:13.842526    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:13.842526    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:13.842526    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:13.846740    9928 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1790"},"items":[{"metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1783","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82563 chars]
	I0122 12:33:13.850705    9928 system_pods.go:59] 12 kube-system pods found
	I0122 12:33:13.850705    9928 system_pods.go:61] "coredns-5dd5756b68-g9wk9" [d5b665f2-ead9-43b8-8bf6-73c2edf81f8f] Running
	I0122 12:33:13.850705    9928 system_pods.go:61] "etcd-multinode-484200" [dc38500e-1aab-483b-bc82-a1071a7b7796] Running
	I0122 12:33:13.850705    9928 system_pods.go:61] "kindnet-f9bw4" [8ee26eb1-daae-41a6-8d1c-e7f3e821ebb5] Running
	I0122 12:33:13.850705    9928 system_pods.go:61] "kindnet-xqpt7" [302b6561-a7e7-40f7-8b2d-9cbe86c27476] Running
	I0122 12:33:13.850705    9928 system_pods.go:61] "kindnet-zs4rk" [a84b5ca3-20c8-4246-911d-8ca0f0f5bf88] Running
	I0122 12:33:13.850705    9928 system_pods.go:61] "kube-apiserver-multinode-484200" [1a34c38d-813d-4a41-ab10-ee15c4a85906] Running
	I0122 12:33:13.850705    9928 system_pods.go:61] "kube-controller-manager-multinode-484200" [2f20b1d1-9ba0-417b-9060-a0e50a1522ad] Running
	I0122 12:33:13.850705    9928 system_pods.go:61] "kube-proxy-m94rg" [603ddbfd-0357-4e57-a38a-5054e02f81ee] Running
	I0122 12:33:13.850705    9928 system_pods.go:61] "kube-proxy-mxd9x" [f90d5b6e-4f0e-466f-8cc8-bc71ebdbad3c] Running
	I0122 12:33:13.850705    9928 system_pods.go:61] "kube-proxy-nsjc6" [6023ce45-661a-489d-aacf-21a85a2c32e4] Running
	I0122 12:33:13.850705    9928 system_pods.go:61] "kube-scheduler-multinode-484200" [d2f95119-a5d5-4331-880e-bf6add1d87b6] Running
	I0122 12:33:13.850705    9928 system_pods.go:61] "storage-provisioner" [30187bc8-2b16-4ca8-944b-c71bfb25e36b] Running
	I0122 12:33:13.850705    9928 system_pods.go:74] duration metric: took 161.5464ms to wait for pod list to return data ...
	I0122 12:33:13.850705    9928 default_sa.go:34] waiting for default service account to be created ...
	I0122 12:33:14.039770    9928 request.go:629] Waited for 189.0646ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/namespaces/default/serviceaccounts
	I0122 12:33:14.039770    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/default/serviceaccounts
	I0122 12:33:14.039770    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:14.039770    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:14.039770    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:14.047971    9928 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0122 12:33:14.047971    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:14.047971    9928 round_trippers.go:580]     Audit-Id: 99c36e4d-1f0d-465e-a254-a2c6d9256814
	I0122 12:33:14.047971    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:14.047971    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:14.047971    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:14.047971    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:14.047971    9928 round_trippers.go:580]     Content-Length: 262
	I0122 12:33:14.047971    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:14 GMT
	I0122 12:33:14.047971    9928 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1792"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"4124c850-901f-4684-9b66-62ca4c3085b4","resourceVersion":"367","creationTimestamp":"2024-01-22T12:10:59Z"}}]}
	I0122 12:33:14.048961    9928 default_sa.go:45] found service account: "default"
	I0122 12:33:14.048961    9928 default_sa.go:55] duration metric: took 198.2559ms for default service account to be created ...
	I0122 12:33:14.048961    9928 system_pods.go:116] waiting for k8s-apps to be running ...
	I0122 12:33:14.246032    9928 request.go:629] Waited for 196.8113ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods
	I0122 12:33:14.246313    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods
	I0122 12:33:14.246356    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:14.246356    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:14.246356    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:14.252841    9928 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:33:14.252916    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:14.252977    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:14.252977    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:14.252977    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:14.252977    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:14.252977    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:14 GMT
	I0122 12:33:14.252977    9928 round_trippers.go:580]     Audit-Id: 76c96352-d504-429d-9af7-5865c4af3fc7
	I0122 12:33:14.254559    9928 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1792"},"items":[{"metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1783","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82563 chars]
	I0122 12:33:14.258590    9928 system_pods.go:86] 12 kube-system pods found
	I0122 12:33:14.258590    9928 system_pods.go:89] "coredns-5dd5756b68-g9wk9" [d5b665f2-ead9-43b8-8bf6-73c2edf81f8f] Running
	I0122 12:33:14.258590    9928 system_pods.go:89] "etcd-multinode-484200" [dc38500e-1aab-483b-bc82-a1071a7b7796] Running
	I0122 12:33:14.258590    9928 system_pods.go:89] "kindnet-f9bw4" [8ee26eb1-daae-41a6-8d1c-e7f3e821ebb5] Running
	I0122 12:33:14.258590    9928 system_pods.go:89] "kindnet-xqpt7" [302b6561-a7e7-40f7-8b2d-9cbe86c27476] Running
	I0122 12:33:14.258590    9928 system_pods.go:89] "kindnet-zs4rk" [a84b5ca3-20c8-4246-911d-8ca0f0f5bf88] Running
	I0122 12:33:14.258590    9928 system_pods.go:89] "kube-apiserver-multinode-484200" [1a34c38d-813d-4a41-ab10-ee15c4a85906] Running
	I0122 12:33:14.258590    9928 system_pods.go:89] "kube-controller-manager-multinode-484200" [2f20b1d1-9ba0-417b-9060-a0e50a1522ad] Running
	I0122 12:33:14.258590    9928 system_pods.go:89] "kube-proxy-m94rg" [603ddbfd-0357-4e57-a38a-5054e02f81ee] Running
	I0122 12:33:14.258590    9928 system_pods.go:89] "kube-proxy-mxd9x" [f90d5b6e-4f0e-466f-8cc8-bc71ebdbad3c] Running
	I0122 12:33:14.258590    9928 system_pods.go:89] "kube-proxy-nsjc6" [6023ce45-661a-489d-aacf-21a85a2c32e4] Running
	I0122 12:33:14.258590    9928 system_pods.go:89] "kube-scheduler-multinode-484200" [d2f95119-a5d5-4331-880e-bf6add1d87b6] Running
	I0122 12:33:14.258590    9928 system_pods.go:89] "storage-provisioner" [30187bc8-2b16-4ca8-944b-c71bfb25e36b] Running
	I0122 12:33:14.258590    9928 system_pods.go:126] duration metric: took 209.6271ms to wait for k8s-apps to be running ...
	I0122 12:33:14.258590    9928 system_svc.go:44] waiting for kubelet service to be running ....
	I0122 12:33:14.272790    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 12:33:14.292650    9928 system_svc.go:56] duration metric: took 34.0116ms WaitForService to wait for kubelet.
	I0122 12:33:14.292650    9928 kubeadm.go:581] duration metric: took 15.1033714s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0122 12:33:14.292650    9928 node_conditions.go:102] verifying NodePressure condition ...
	I0122 12:33:14.433395    9928 request.go:629] Waited for 140.5281ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/nodes
	I0122 12:33:14.433454    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes
	I0122 12:33:14.433454    9928 round_trippers.go:469] Request Headers:
	I0122 12:33:14.433454    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:33:14.433454    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:33:14.438129    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:33:14.438816    9928 round_trippers.go:577] Response Headers:
	I0122 12:33:14.438816    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:33:14.438816    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:33:14.438816    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:33:14.438816    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:33:14.438920    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:33:14 GMT
	I0122 12:33:14.438920    9928 round_trippers.go:580]     Audit-Id: e950cdd5-6785-4a41-b04b-4dcb13235fd8
	I0122 12:33:14.438992    9928 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1792"},"items":[{"metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 14732 chars]
	I0122 12:33:14.440222    9928 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0122 12:33:14.440222    9928 node_conditions.go:123] node cpu capacity is 2
	I0122 12:33:14.440295    9928 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0122 12:33:14.440295    9928 node_conditions.go:123] node cpu capacity is 2
	I0122 12:33:14.440295    9928 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0122 12:33:14.440295    9928 node_conditions.go:123] node cpu capacity is 2
	I0122 12:33:14.440295    9928 node_conditions.go:105] duration metric: took 147.6451ms to run NodePressure ...
	I0122 12:33:14.440295    9928 start.go:228] waiting for startup goroutines ...
	I0122 12:33:14.440373    9928 start.go:233] waiting for cluster config update ...
	I0122 12:33:14.440373    9928 start.go:242] writing updated cluster config ...
	I0122 12:33:14.452638    9928 config.go:182] Loaded profile config "multinode-484200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 12:33:14.453603    9928 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\config.json ...
	I0122 12:33:14.458254    9928 out.go:177] * Starting worker node multinode-484200-m02 in cluster multinode-484200
	I0122 12:33:14.458254    9928 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0122 12:33:14.458254    9928 cache.go:56] Caching tarball of preloaded images
	I0122 12:33:14.458254    9928 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0122 12:33:14.459746    9928 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0122 12:33:14.459954    9928 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\config.json ...
	I0122 12:33:14.462164    9928 start.go:365] acquiring machines lock for multinode-484200-m02: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0122 12:33:14.462536    9928 start.go:369] acquired machines lock for "multinode-484200-m02" in 372.4µs
	I0122 12:33:14.462747    9928 start.go:96] Skipping create...Using existing machine configuration
	I0122 12:33:14.462747    9928 fix.go:54] fixHost starting: m02
	I0122 12:33:14.463011    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:33:16.582684    9928 main.go:141] libmachine: [stdout =====>] : Off
	
	I0122 12:33:16.582755    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:33:16.582755    9928 fix.go:102] recreateIfNeeded on multinode-484200-m02: state=Stopped err=<nil>
	W0122 12:33:16.582755    9928 fix.go:128] unexpected machine state, will restart: <nil>
	I0122 12:33:16.584123    9928 out.go:177] * Restarting existing hyperv VM for "multinode-484200-m02" ...
	I0122 12:33:16.585162    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-484200-m02
	I0122 12:33:19.488456    9928 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:33:19.488592    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:33:19.488627    9928 main.go:141] libmachine: Waiting for host to start...
	I0122 12:33:19.488665    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:33:21.740472    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:33:21.740472    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:33:21.740558    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:33:24.286481    9928 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:33:24.286729    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:33:25.289539    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:33:27.548812    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:33:27.548812    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:33:27.548812    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:33:30.109002    9928 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:33:30.109002    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:33:31.111517    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:33:33.352583    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:33:33.352583    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:33:33.352583    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:33:35.915759    9928 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:33:35.915798    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:33:36.917392    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:33:39.150660    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:33:39.150660    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:33:39.150660    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:33:41.756864    9928 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:33:41.756864    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:33:42.761637    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:33:45.038046    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:33:45.038046    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:33:45.038172    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:33:47.600755    9928 main.go:141] libmachine: [stdout =====>] : 172.23.187.210
	
	I0122 12:33:47.601070    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:33:47.604070    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:33:49.769118    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:33:49.769357    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:33:49.769357    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:33:52.314596    9928 main.go:141] libmachine: [stdout =====>] : 172.23.187.210
	
	I0122 12:33:52.314596    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:33:52.314823    9928 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\config.json ...
	I0122 12:33:52.317448    9928 machine.go:88] provisioning docker machine ...
	I0122 12:33:52.317448    9928 buildroot.go:166] provisioning hostname "multinode-484200-m02"
	I0122 12:33:52.317448    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:33:54.479151    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:33:54.479218    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:33:54.479218    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:33:57.057445    9928 main.go:141] libmachine: [stdout =====>] : 172.23.187.210
	
	I0122 12:33:57.057610    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:33:57.062750    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:33:57.064168    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.187.210 22 <nil> <nil>}
	I0122 12:33:57.064168    9928 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-484200-m02 && echo "multinode-484200-m02" | sudo tee /etc/hostname
	I0122 12:33:57.228802    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-484200-m02
	
	I0122 12:33:57.228802    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:33:59.393183    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:33:59.393279    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:33:59.393279    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:34:01.968855    9928 main.go:141] libmachine: [stdout =====>] : 172.23.187.210
	
	I0122 12:34:01.968855    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:01.977092    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:34:01.978356    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.187.210 22 <nil> <nil>}
	I0122 12:34:01.978421    9928 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-484200-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-484200-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-484200-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0122 12:34:02.131227    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 12:34:02.131339    9928 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0122 12:34:02.131339    9928 buildroot.go:174] setting up certificates
	I0122 12:34:02.131339    9928 provision.go:83] configureAuth start
	I0122 12:34:02.131339    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:34:04.270182    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:34:04.270182    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:04.270182    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:34:06.829698    9928 main.go:141] libmachine: [stdout =====>] : 172.23.187.210
	
	I0122 12:34:06.829985    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:06.830178    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:34:08.996613    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:34:08.996676    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:08.996820    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:34:11.634587    9928 main.go:141] libmachine: [stdout =====>] : 172.23.187.210
	
	I0122 12:34:11.634815    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:11.634815    9928 provision.go:138] copyHostCerts
	I0122 12:34:11.635113    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0122 12:34:11.635436    9928 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0122 12:34:11.635534    9928 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0122 12:34:11.636100    9928 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0122 12:34:11.637227    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0122 12:34:11.637431    9928 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0122 12:34:11.637431    9928 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0122 12:34:11.637431    9928 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0122 12:34:11.639242    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0122 12:34:11.639543    9928 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0122 12:34:11.639653    9928 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0122 12:34:11.640201    9928 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0122 12:34:11.641276    9928 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-484200-m02 san=[172.23.187.210 172.23.187.210 localhost 127.0.0.1 minikube multinode-484200-m02]
	I0122 12:34:11.804762    9928 provision.go:172] copyRemoteCerts
	I0122 12:34:11.816358    9928 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0122 12:34:11.816358    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:34:13.946659    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:34:13.946659    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:13.946659    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:34:16.535379    9928 main.go:141] libmachine: [stdout =====>] : 172.23.187.210
	
	I0122 12:34:16.535574    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:16.536130    9928 sshutil.go:53] new ssh client: &{IP:172.23.187.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200-m02\id_rsa Username:docker}
	I0122 12:34:16.644492    9928 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8281139s)
	I0122 12:34:16.644492    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0122 12:34:16.645113    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0122 12:34:16.690124    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0122 12:34:16.690451    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I0122 12:34:16.727351    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0122 12:34:16.727769    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0122 12:34:16.770488    9928 provision.go:86] duration metric: configureAuth took 14.6390856s
	I0122 12:34:16.770488    9928 buildroot.go:189] setting minikube options for container-runtime
	I0122 12:34:16.771255    9928 config.go:182] Loaded profile config "multinode-484200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 12:34:16.771324    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:34:18.895138    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:34:18.895321    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:18.895415    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:34:21.450253    9928 main.go:141] libmachine: [stdout =====>] : 172.23.187.210
	
	I0122 12:34:21.450459    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:21.456881    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:34:21.457571    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.187.210 22 <nil> <nil>}
	I0122 12:34:21.457571    9928 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0122 12:34:21.600469    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0122 12:34:21.600551    9928 buildroot.go:70] root file system type: tmpfs
	I0122 12:34:21.600752    9928 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0122 12:34:21.600792    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:34:23.776909    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:34:23.776909    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:23.777003    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:34:26.380446    9928 main.go:141] libmachine: [stdout =====>] : 172.23.187.210
	
	I0122 12:34:26.380657    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:26.386883    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:34:26.387511    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.187.210 22 <nil> <nil>}
	I0122 12:34:26.387511    9928 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.23.191.211"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0122 12:34:26.551691    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.23.191.211
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0122 12:34:26.552275    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:34:28.706607    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:34:28.706794    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:28.706794    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:34:31.309577    9928 main.go:141] libmachine: [stdout =====>] : 172.23.187.210
	
	I0122 12:34:31.309577    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:31.314975    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:34:31.315794    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.187.210 22 <nil> <nil>}
	I0122 12:34:31.315873    9928 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0122 12:34:32.486837    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0122 12:34:32.486837    9928 machine.go:91] provisioned docker machine in 40.1692144s
	I0122 12:34:32.486837    9928 start.go:300] post-start starting for "multinode-484200-m02" (driver="hyperv")
	I0122 12:34:32.486837    9928 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0122 12:34:32.499278    9928 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0122 12:34:32.499278    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:34:34.631492    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:34:34.631492    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:34.631596    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:34:37.222269    9928 main.go:141] libmachine: [stdout =====>] : 172.23.187.210
	
	I0122 12:34:37.222269    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:37.222878    9928 sshutil.go:53] new ssh client: &{IP:172.23.187.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200-m02\id_rsa Username:docker}
	I0122 12:34:37.338396    9928 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8390972s)
	I0122 12:34:37.354863    9928 ssh_runner.go:195] Run: cat /etc/os-release
	I0122 12:34:37.361293    9928 command_runner.go:130] > NAME=Buildroot
	I0122 12:34:37.361293    9928 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0122 12:34:37.361293    9928 command_runner.go:130] > ID=buildroot
	I0122 12:34:37.361293    9928 command_runner.go:130] > VERSION_ID=2021.02.12
	I0122 12:34:37.361293    9928 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0122 12:34:37.361293    9928 info.go:137] Remote host: Buildroot 2021.02.12
	I0122 12:34:37.361514    9928 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0122 12:34:37.361949    9928 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0122 12:34:37.363542    9928 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> 63962.pem in /etc/ssl/certs
	I0122 12:34:37.363542    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> /etc/ssl/certs/63962.pem
	I0122 12:34:37.376811    9928 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0122 12:34:37.391587    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem --> /etc/ssl/certs/63962.pem (1708 bytes)
	I0122 12:34:37.433391    9928 start.go:303] post-start completed in 4.9465316s
	I0122 12:34:37.433468    9928 fix.go:56] fixHost completed within 1m22.9703581s
	I0122 12:34:37.433468    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:34:39.563914    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:34:39.563914    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:39.563914    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:34:42.135186    9928 main.go:141] libmachine: [stdout =====>] : 172.23.187.210
	
	I0122 12:34:42.135186    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:42.143984    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:34:42.145284    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.187.210 22 <nil> <nil>}
	I0122 12:34:42.145284    9928 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0122 12:34:42.281742    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705926882.281948644
	
	I0122 12:34:42.281742    9928 fix.go:206] guest clock: 1705926882.281948644
	I0122 12:34:42.281742    9928 fix.go:219] Guest: 2024-01-22 12:34:42.281948644 +0000 UTC Remote: 2024-01-22 12:34:37.4334685 +0000 UTC m=+227.245503601 (delta=4.848480144s)
	I0122 12:34:42.281895    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:34:44.438459    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:34:44.438459    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:44.438459    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:34:46.980361    9928 main.go:141] libmachine: [stdout =====>] : 172.23.187.210
	
	I0122 12:34:46.980361    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:46.986677    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:34:46.987357    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.187.210 22 <nil> <nil>}
	I0122 12:34:46.987357    9928 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1705926882
	I0122 12:34:47.135337    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan 22 12:34:42 UTC 2024
	
	I0122 12:34:47.135337    9928 fix.go:226] clock set: Mon Jan 22 12:34:42 UTC 2024
	 (err=<nil>)
	I0122 12:34:47.135337    9928 start.go:83] releasing machines lock for "multinode-484200-m02", held for 1m32.6723949s
	I0122 12:34:47.135929    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:34:49.288328    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:34:49.288629    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:49.288850    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:34:51.868114    9928 main.go:141] libmachine: [stdout =====>] : 172.23.187.210
	
	I0122 12:34:51.868114    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:51.868557    9928 out.go:177] * Found network options:
	I0122 12:34:51.869517    9928 out.go:177]   - NO_PROXY=172.23.191.211
	W0122 12:34:51.870847    9928 proxy.go:119] fail to check proxy env: Error ip not in block
	I0122 12:34:51.871593    9928 out.go:177]   - NO_PROXY=172.23.191.211
	W0122 12:34:51.872394    9928 proxy.go:119] fail to check proxy env: Error ip not in block
	W0122 12:34:51.874107    9928 proxy.go:119] fail to check proxy env: Error ip not in block
	I0122 12:34:51.875589    9928 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0122 12:34:51.875589    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:34:51.891552    9928 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0122 12:34:51.891552    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:34:54.094223    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:34:54.094423    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:54.094507    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:34:54.126045    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:34:54.126147    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:54.126147    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:34:56.773482    9928 main.go:141] libmachine: [stdout =====>] : 172.23.187.210
	
	I0122 12:34:56.773482    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:56.774171    9928 sshutil.go:53] new ssh client: &{IP:172.23.187.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200-m02\id_rsa Username:docker}
	I0122 12:34:56.793245    9928 main.go:141] libmachine: [stdout =====>] : 172.23.187.210
	
	I0122 12:34:56.793245    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:34:56.793799    9928 sshutil.go:53] new ssh client: &{IP:172.23.187.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200-m02\id_rsa Username:docker}
	I0122 12:34:56.874449    9928 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0122 12:34:56.875563    9928 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.983817s)
	W0122 12:34:56.875563    9928 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0122 12:34:56.889428    9928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0122 12:34:56.987405    9928 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0122 12:34:56.988394    9928 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0122 12:34:56.988439    9928 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1127822s)
	I0122 12:34:56.988439    9928 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0122 12:34:56.988534    9928 start.go:475] detecting cgroup driver to use...
	I0122 12:34:56.988778    9928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 12:34:57.025463    9928 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0122 12:34:57.040624    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0122 12:34:57.076176    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0122 12:34:57.095878    9928 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0122 12:34:57.115082    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0122 12:34:57.146139    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 12:34:57.178095    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0122 12:34:57.209767    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 12:34:57.242105    9928 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0122 12:34:57.272128    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0122 12:34:57.302134    9928 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0122 12:34:57.321090    9928 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0122 12:34:57.333617    9928 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0122 12:34:57.362699    9928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 12:34:57.537738    9928 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0122 12:34:57.568520    9928 start.go:475] detecting cgroup driver to use...
	I0122 12:34:57.582669    9928 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0122 12:34:57.604090    9928 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0122 12:34:57.604181    9928 command_runner.go:130] > [Unit]
	I0122 12:34:57.604181    9928 command_runner.go:130] > Description=Docker Application Container Engine
	I0122 12:34:57.604181    9928 command_runner.go:130] > Documentation=https://docs.docker.com
	I0122 12:34:57.604181    9928 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0122 12:34:57.604181    9928 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0122 12:34:57.604251    9928 command_runner.go:130] > StartLimitBurst=3
	I0122 12:34:57.604251    9928 command_runner.go:130] > StartLimitIntervalSec=60
	I0122 12:34:57.604251    9928 command_runner.go:130] > [Service]
	I0122 12:34:57.604293    9928 command_runner.go:130] > Type=notify
	I0122 12:34:57.604293    9928 command_runner.go:130] > Restart=on-failure
	I0122 12:34:57.604293    9928 command_runner.go:130] > Environment=NO_PROXY=172.23.191.211
	I0122 12:34:57.604293    9928 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0122 12:34:57.604293    9928 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0122 12:34:57.604293    9928 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0122 12:34:57.604397    9928 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0122 12:34:57.604397    9928 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0122 12:34:57.604397    9928 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0122 12:34:57.604397    9928 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0122 12:34:57.604397    9928 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0122 12:34:57.604510    9928 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0122 12:34:57.604540    9928 command_runner.go:130] > ExecStart=
	I0122 12:34:57.604540    9928 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0122 12:34:57.604578    9928 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0122 12:34:57.604578    9928 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0122 12:34:57.604578    9928 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0122 12:34:57.604651    9928 command_runner.go:130] > LimitNOFILE=infinity
	I0122 12:34:57.604651    9928 command_runner.go:130] > LimitNPROC=infinity
	I0122 12:34:57.604651    9928 command_runner.go:130] > LimitCORE=infinity
	I0122 12:34:57.604651    9928 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0122 12:34:57.604651    9928 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0122 12:34:57.604651    9928 command_runner.go:130] > TasksMax=infinity
	I0122 12:34:57.604651    9928 command_runner.go:130] > TimeoutStartSec=0
	I0122 12:34:57.604651    9928 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0122 12:34:57.604651    9928 command_runner.go:130] > Delegate=yes
	I0122 12:34:57.604651    9928 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0122 12:34:57.604651    9928 command_runner.go:130] > KillMode=process
	I0122 12:34:57.604651    9928 command_runner.go:130] > [Install]
	I0122 12:34:57.604651    9928 command_runner.go:130] > WantedBy=multi-user.target
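The unit dump above shows the systemd drop-in pattern the log's comments describe: a second `ExecStart=` is only legal after an empty `ExecStart=` clears the value inherited from the base unit. A minimal sketch of that pattern, written to a temp file instead of a real drop-in path (the dockerd flags here are illustrative, not minikube's exact command line):

```shell
#!/bin/sh
# Sketch of the ExecStart-clearing drop-in seen above. systemd rejects a
# service with two ExecStart= values (for Type!=oneshot), so the override
# first clears the inherited command with a bare "ExecStart=".
set -e
dropin=$(mktemp)   # stand-in for /etc/systemd/system/docker.service.d/10-machine.conf
cat > "$dropin" <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF
grep -c '^ExecStart=' "$dropin"   # 2: the clearing directive plus the real command
```

After installing such a drop-in for real, `systemctl daemon-reload` (run later in this log) is what makes systemd pick it up.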
	I0122 12:34:57.617972    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 12:34:57.650383    9928 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0122 12:34:57.690602    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 12:34:57.725651    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0122 12:34:57.759626    9928 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0122 12:34:57.808632    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0122 12:34:57.831025    9928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 12:34:57.858981    9928 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0122 12:34:57.875689    9928 ssh_runner.go:195] Run: which cri-dockerd
	I0122 12:34:57.880413    9928 command_runner.go:130] > /usr/bin/cri-dockerd
	I0122 12:34:57.899616    9928 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0122 12:34:57.917486    9928 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0122 12:34:57.956132    9928 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0122 12:34:58.131694    9928 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0122 12:34:58.279314    9928 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0122 12:34:58.279428    9928 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0122 12:34:58.322438    9928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 12:34:58.488165    9928 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0122 12:35:00.061553    9928 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.573381s)
	I0122 12:35:00.075733    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0122 12:35:00.112763    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0122 12:35:00.148178    9928 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0122 12:35:00.327314    9928 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0122 12:35:00.500201    9928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 12:35:00.666573    9928 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0122 12:35:00.706470    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0122 12:35:00.740345    9928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 12:35:00.915381    9928 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0122 12:35:01.027974    9928 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0122 12:35:01.039978    9928 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0122 12:35:01.048970    9928 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0122 12:35:01.048970    9928 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0122 12:35:01.048970    9928 command_runner.go:130] > Device: 16h/22d	Inode: 844         Links: 1
	I0122 12:35:01.048970    9928 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0122 12:35:01.048970    9928 command_runner.go:130] > Access: 2024-01-22 12:35:00.941717676 +0000
	I0122 12:35:01.048970    9928 command_runner.go:130] > Modify: 2024-01-22 12:35:00.941717676 +0000
	I0122 12:35:01.048970    9928 command_runner.go:130] > Change: 2024-01-22 12:35:00.945717676 +0000
	I0122 12:35:01.048970    9928 command_runner.go:130] >  Birth: -
	I0122 12:35:01.048970    9928 start.go:543] Will wait 60s for crictl version
	I0122 12:35:01.059970    9928 ssh_runner.go:195] Run: which crictl
	I0122 12:35:01.066202    9928 command_runner.go:130] > /usr/bin/crictl
	I0122 12:35:01.083934    9928 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0122 12:35:01.158267    9928 command_runner.go:130] > Version:  0.1.0
	I0122 12:35:01.158267    9928 command_runner.go:130] > RuntimeName:  docker
	I0122 12:35:01.158267    9928 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0122 12:35:01.158267    9928 command_runner.go:130] > RuntimeApiVersion:  v1
	I0122 12:35:01.158267    9928 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0122 12:35:01.167556    9928 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0122 12:35:01.201344    9928 command_runner.go:130] > 24.0.7
	I0122 12:35:01.210706    9928 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0122 12:35:01.247316    9928 command_runner.go:130] > 24.0.7
	I0122 12:35:01.249332    9928 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0122 12:35:01.249332    9928 out.go:177]   - env NO_PROXY=172.23.191.211
	I0122 12:35:01.250310    9928 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0122 12:35:01.254329    9928 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0122 12:35:01.254329    9928 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0122 12:35:01.254329    9928 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0122 12:35:01.254329    9928 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:49:f0:b7 Flags:up|broadcast|multicast|running}
	I0122 12:35:01.257332    9928 ip.go:210] interface addr: fe80::2ecf:c548:6b1f:89a8/64
	I0122 12:35:01.257332    9928 ip.go:210] interface addr: 172.23.176.1/20
	I0122 12:35:01.270352    9928 ssh_runner.go:195] Run: grep 172.23.176.1	host.minikube.internal$ /etc/hosts
	I0122 12:35:01.275635    9928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.176.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 12:35:01.294770    9928 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200 for IP: 172.23.187.210
	I0122 12:35:01.294862    9928 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 12:35:01.295148    9928 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0122 12:35:01.295716    9928 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0122 12:35:01.295882    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0122 12:35:01.296365    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0122 12:35:01.296480    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0122 12:35:01.296787    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0122 12:35:01.297365    9928 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\6396.pem (1338 bytes)
	W0122 12:35:01.297786    9928 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\6396_empty.pem, impossibly tiny 0 bytes
	I0122 12:35:01.297919    9928 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0122 12:35:01.298172    9928 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0122 12:35:01.298551    9928 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0122 12:35:01.298906    9928 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0122 12:35:01.299438    9928 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem (1708 bytes)
	I0122 12:35:01.299748    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> /usr/share/ca-certificates/63962.pem
	I0122 12:35:01.299859    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:35:01.300177    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\6396.pem -> /usr/share/ca-certificates/6396.pem
	I0122 12:35:01.300936    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0122 12:35:01.339908    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0122 12:35:01.380054    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0122 12:35:01.415556    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0122 12:35:01.452418    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem --> /usr/share/ca-certificates/63962.pem (1708 bytes)
	I0122 12:35:01.493889    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0122 12:35:01.534081    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\6396.pem --> /usr/share/ca-certificates/6396.pem (1338 bytes)
	I0122 12:35:01.588788    9928 ssh_runner.go:195] Run: openssl version
	I0122 12:35:01.595896    9928 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0122 12:35:01.610061    9928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/63962.pem && ln -fs /usr/share/ca-certificates/63962.pem /etc/ssl/certs/63962.pem"
	I0122 12:35:01.640344    9928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/63962.pem
	I0122 12:35:01.647349    9928 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 22 10:53 /usr/share/ca-certificates/63962.pem
	I0122 12:35:01.647349    9928 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 22 10:53 /usr/share/ca-certificates/63962.pem
	I0122 12:35:01.659628    9928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/63962.pem
	I0122 12:35:01.666680    9928 command_runner.go:130] > 3ec20f2e
	I0122 12:35:01.677817    9928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/63962.pem /etc/ssl/certs/3ec20f2e.0"
	I0122 12:35:01.708764    9928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0122 12:35:01.738184    9928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:35:01.745096    9928 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 22 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:35:01.745147    9928 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 22 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:35:01.757364    9928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:35:01.764879    9928 command_runner.go:130] > b5213941
	I0122 12:35:01.777202    9928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0122 12:35:01.806212    9928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6396.pem && ln -fs /usr/share/ca-certificates/6396.pem /etc/ssl/certs/6396.pem"
	I0122 12:35:01.836216    9928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6396.pem
	I0122 12:35:01.843018    9928 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 22 10:53 /usr/share/ca-certificates/6396.pem
	I0122 12:35:01.843018    9928 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 22 10:53 /usr/share/ca-certificates/6396.pem
	I0122 12:35:01.856594    9928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6396.pem
	I0122 12:35:01.864518    9928 command_runner.go:130] > 51391683
	I0122 12:35:01.880129    9928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6396.pem /etc/ssl/certs/51391683.0"
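The `openssl x509 -hash` / `ln -fs` sequence above is OpenSSL's standard CA-install convention: the trust directory is scanned for files named `<subject-hash>.0`, so each cert is symlinked under its hash. A runnable sketch of the same steps in a temp directory (the self-signed cert is generated here purely for illustration):

```shell
#!/bin/sh
# Sketch of the cert-hash linking seen above. OpenSSL locates trusted CAs
# by an 8-hex-digit hash of the subject name; the <hash>.0 symlink is what
# lines like /etc/ssl/certs/b5213941.0 in the log correspond to.
set -e
dir=$(mktemp -d)   # stand-in for /etc/ssl/certs
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=minikubeCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/${hash}.0"
test -L "$dir/${hash}.0" && echo "linked as ${hash}.0"
```

The `test -L || ln -fs` guard in the logged commands just makes the link creation idempotent across restarts.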
	I0122 12:35:01.913220    9928 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0122 12:35:01.918826    9928 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0122 12:35:01.919890    9928 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0122 12:35:01.931678    9928 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0122 12:35:01.969244    9928 command_runner.go:130] > cgroupfs
	I0122 12:35:01.969244    9928 cni.go:84] Creating CNI manager for ""
	I0122 12:35:01.969825    9928 cni.go:136] 3 nodes found, recommending kindnet
	I0122 12:35:01.969874    9928 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0122 12:35:01.969966    9928 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.23.187.210 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-484200 NodeName:multinode-484200-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.23.191.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.23.187.210 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0122 12:35:01.970158    9928 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.23.187.210
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-484200-m02"
	  kubeletExtraArgs:
	    node-ip: 172.23.187.210
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.23.191.211"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0122 12:35:01.970296    9928 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-484200-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.187.210
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-484200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0122 12:35:01.982230    9928 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0122 12:35:01.998282    9928 command_runner.go:130] > kubeadm
	I0122 12:35:01.998644    9928 command_runner.go:130] > kubectl
	I0122 12:35:01.998644    9928 command_runner.go:130] > kubelet
	I0122 12:35:01.998719    9928 binaries.go:44] Found k8s binaries, skipping transfer
	I0122 12:35:02.016417    9928 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0122 12:35:02.031234    9928 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (383 bytes)
	I0122 12:35:02.064479    9928 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0122 12:35:02.107844    9928 ssh_runner.go:195] Run: grep 172.23.191.211	control-plane.minikube.internal$ /etc/hosts
	I0122 12:35:02.113428    9928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.191.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
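The `/etc/hosts` edit just run is a grep-and-append rewrite: strip any stale line for the name, then append the current IP, so repeated runs stay idempotent. A sketch of the same technique against a temp copy (paths and IPs here mirror the log but the file is synthetic):

```shell
#!/bin/sh
# Sketch of minikube's /etc/hosts update: remove any existing mapping for
# the name, append the fresh one, then swap the file into place atomically.
set -e
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.23.0.9\tcontrol-plane.minikube.internal\n' > "$hosts"
name=control-plane.minikube.internal
ip=172.23.191.211
{ grep -v "${name}$" "$hosts"; printf '%s\t%s\n' "$ip" "$name"; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep "$name" "$hosts"   # the name now maps to 172.23.191.211 only
```

The real command pipes through `sudo cp` rather than `mv` because `/etc/hosts` may be a bind mount that must be written in place.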
	I0122 12:35:02.145639    9928 host.go:66] Checking if "multinode-484200" exists ...
	I0122 12:35:02.146608    9928 config.go:182] Loaded profile config "multinode-484200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 12:35:02.146687    9928 start.go:304] JoinCluster: &{Name:multinode-484200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-484200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.23.191.211 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.187.210 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.23.187.157 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0122 12:35:02.146912    9928 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0122 12:35:02.146981    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:35:04.290059    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:35:04.290059    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:35:04.290059    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:35:06.823564    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:35:06.823802    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:35:06.824382    9928 sshutil.go:53] new ssh client: &{IP:172.23.191.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200\id_rsa Username:docker}
	I0122 12:35:07.026530    9928 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token c29mmz.igowyrvgk826eo54 --discovery-token-ca-cert-hash sha256:d9e856518777d1a58dcd52dda4caee52e72283bad61f5aa3ed0978c1365deb23 
	I0122 12:35:07.026530    9928 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.8795968s)
	I0122 12:35:07.026530    9928 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.23.187.210 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0122 12:35:07.026530    9928 host.go:66] Checking if "multinode-484200" exists ...
	I0122 12:35:07.039516    9928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-484200-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0122 12:35:07.039516    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:35:09.205563    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:35:09.205563    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:35:09.205678    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:35:11.784579    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:35:11.784579    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:35:11.785308    9928 sshutil.go:53] new ssh client: &{IP:172.23.191.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200\id_rsa Username:docker}
	I0122 12:35:11.966776    9928 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0122 12:35:12.066807    9928 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-xqpt7, kube-system/kube-proxy-nsjc6
	I0122 12:35:15.089723    9928 command_runner.go:130] > node/multinode-484200-m02 cordoned
	I0122 12:35:15.089723    9928 command_runner.go:130] > pod "busybox-5b5d89c9d6-r46hz" has DeletionTimestamp older than 1 seconds, skipping
	I0122 12:35:15.089723    9928 command_runner.go:130] > node/multinode-484200-m02 drained
	I0122 12:35:15.089815    9928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-484200-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (8.0502637s)
	I0122 12:35:15.089815    9928 node.go:108] successfully drained node "m02"
	I0122 12:35:15.090591    9928 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 12:35:15.091310    9928 kapi.go:59] client config for multinode-484200: &rest.Config{Host:"https://172.23.191.211:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x184a600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0122 12:35:15.092634    9928 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0122 12:35:15.092634    9928 round_trippers.go:463] DELETE https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:35:15.092634    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:15.092634    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:15.092634    9928 round_trippers.go:473]     Content-Type: application/json
	I0122 12:35:15.092634    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:15.111476    9928 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0122 12:35:15.111476    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:15.111476    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:15.111476    9928 round_trippers.go:580]     Content-Length: 171
	I0122 12:35:15.111476    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:15 GMT
	I0122 12:35:15.111476    9928 round_trippers.go:580]     Audit-Id: a64fa694-352e-45b8-9bcf-0318f369ed64
	I0122 12:35:15.111476    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:15.111476    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:15.111710    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:15.111843    9928 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-484200-m02","kind":"nodes","uid":"883dcdbf-0b87-4d5f-978e-f3b1f51b7169"}}
	I0122 12:35:15.111931    9928 node.go:124] successfully deleted node "m02"
	I0122 12:35:15.111976    9928 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.23.187.210 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0122 12:35:15.112002    9928 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.23.187.210 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0122 12:35:15.112074    9928 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token c29mmz.igowyrvgk826eo54 --discovery-token-ca-cert-hash sha256:d9e856518777d1a58dcd52dda4caee52e72283bad61f5aa3ed0978c1365deb23 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-484200-m02"
	I0122 12:35:15.384211    9928 command_runner.go:130] ! W0122 12:35:15.384666    1366 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0122 12:35:15.929965    9928 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0122 12:35:17.725870    9928 command_runner.go:130] > [preflight] Running pre-flight checks
	I0122 12:35:17.725956    9928 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0122 12:35:17.725956    9928 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0122 12:35:17.725956    9928 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0122 12:35:17.725956    9928 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0122 12:35:17.726059    9928 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0122 12:35:17.726059    9928 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0122 12:35:17.726059    9928 command_runner.go:130] > This node has joined the cluster:
	I0122 12:35:17.726110    9928 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0122 12:35:17.726110    9928 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0122 12:35:17.726110    9928 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0122 12:35:17.726110    9928 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token c29mmz.igowyrvgk826eo54 --discovery-token-ca-cert-hash sha256:d9e856518777d1a58dcd52dda4caee52e72283bad61f5aa3ed0978c1365deb23 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-484200-m02": (2.6140036s)
	I0122 12:35:17.726175    9928 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0122 12:35:18.004036    9928 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0122 12:35:18.249529    9928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=02088786cd935ebc1444d13e603368558b96d0e3 minikube.k8s.io/name=multinode-484200 minikube.k8s.io/updated_at=2024_01_22T12_35_18_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 12:35:18.410803    9928 command_runner.go:130] > node/multinode-484200-m02 labeled
	I0122 12:35:18.410803    9928 command_runner.go:130] > node/multinode-484200-m03 labeled
	I0122 12:35:18.410803    9928 start.go:306] JoinCluster complete in 16.2641232s
	I0122 12:35:18.410965    9928 cni.go:84] Creating CNI manager for ""
	I0122 12:35:18.410965    9928 cni.go:136] 3 nodes found, recommending kindnet
	I0122 12:35:18.424363    9928 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0122 12:35:18.436134    9928 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0122 12:35:18.436734    9928 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0122 12:35:18.436734    9928 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0122 12:35:18.436734    9928 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0122 12:35:18.436734    9928 command_runner.go:130] > Access: 2024-01-22 12:31:26.825009300 +0000
	I0122 12:35:18.436734    9928 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0122 12:35:18.436734    9928 command_runner.go:130] > Change: 2024-01-22 12:31:17.097000000 +0000
	I0122 12:35:18.436734    9928 command_runner.go:130] >  Birth: -
	I0122 12:35:18.437162    9928 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0122 12:35:18.437186    9928 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0122 12:35:18.480511    9928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0122 12:35:18.884127    9928 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0122 12:35:18.884234    9928 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0122 12:35:18.884234    9928 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0122 12:35:18.884234    9928 command_runner.go:130] > daemonset.apps/kindnet configured
	I0122 12:35:18.885509    9928 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 12:35:18.886355    9928 kapi.go:59] client config for multinode-484200: &rest.Config{Host:"https://172.23.191.211:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x184a600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0122 12:35:18.887081    9928 round_trippers.go:463] GET https://172.23.191.211:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0122 12:35:18.887167    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:18.887167    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:18.887167    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:18.891251    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:35:18.891251    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:18.891251    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:18.891251    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:18.891251    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:18.891251    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:18.891251    9928 round_trippers.go:580]     Content-Length: 292
	I0122 12:35:18.891251    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:18 GMT
	I0122 12:35:18.891251    9928 round_trippers.go:580]     Audit-Id: e0b95ad5-0b37-45af-92ac-c24fc2f57b93
	I0122 12:35:18.891251    9928 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"98df7e30-2026-4507-ad8d-589d82e9b1ba","resourceVersion":"1788","creationTimestamp":"2024-01-22T12:10:46Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0122 12:35:18.892279    9928 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-484200" context rescaled to 1 replicas
	I0122 12:35:18.892279    9928 start.go:223] Will wait 6m0s for node &{Name:m02 IP:172.23.187.210 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0122 12:35:18.892279    9928 out.go:177] * Verifying Kubernetes components...
	I0122 12:35:18.905233    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 12:35:18.930399    9928 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 12:35:18.931345    9928 kapi.go:59] client config for multinode-484200: &rest.Config{Host:"https://172.23.191.211:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x184a600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0122 12:35:18.931787    9928 node_ready.go:35] waiting up to 6m0s for node "multinode-484200-m02" to be "Ready" ...
	I0122 12:35:18.931787    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:35:18.931787    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:18.931787    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:18.931787    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:18.936014    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:35:18.936014    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:18.936014    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:18.936014    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:18 GMT
	I0122 12:35:18.936014    9928 round_trippers.go:580]     Audit-Id: 876996f0-b9ca-46a7-817a-3142ec5841dc
	I0122 12:35:18.936014    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:18.936014    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:18.936014    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:18.936014    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"ee2c1689-6eea-4fcd-b8b3-161d3aacd5be","resourceVersion":"1944","creationTimestamp":"2024-01-22T12:35:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_35_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:35:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3727 chars]
	I0122 12:35:19.438498    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:35:19.438498    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:19.438498    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:19.438498    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:19.444122    9928 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:35:19.444395    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:19.444395    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:19.444395    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:19.444395    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:19.444395    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:19 GMT
	I0122 12:35:19.444395    9928 round_trippers.go:580]     Audit-Id: eaf2baa2-53ae-476b-91b7-acc73c490a5f
	I0122 12:35:19.444395    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:19.444731    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"ee2c1689-6eea-4fcd-b8b3-161d3aacd5be","resourceVersion":"1944","creationTimestamp":"2024-01-22T12:35:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_35_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:35:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3727 chars]
	I0122 12:35:19.938707    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:35:19.938803    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:19.938803    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:19.938803    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:19.943389    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:35:19.943798    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:19.943798    9928 round_trippers.go:580]     Audit-Id: 0776c4be-d016-4917-8718-f4e7886d2da2
	I0122 12:35:19.943798    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:19.943798    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:19.943967    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:19.944034    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:19.944034    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:19 GMT
	I0122 12:35:19.944224    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"ee2c1689-6eea-4fcd-b8b3-161d3aacd5be","resourceVersion":"1944","creationTimestamp":"2024-01-22T12:35:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_35_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:35:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3727 chars]
	I0122 12:35:20.440603    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:35:20.440794    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:20.440794    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:20.440794    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:20.445687    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:35:20.445740    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:20.445740    9928 round_trippers.go:580]     Audit-Id: 18839761-1a9d-4899-98ab-8a0620da126d
	I0122 12:35:20.445740    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:20.445740    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:20.445740    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:20.445740    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:20.445740    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:20 GMT
	I0122 12:35:20.446104    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"ee2c1689-6eea-4fcd-b8b3-161d3aacd5be","resourceVersion":"1944","creationTimestamp":"2024-01-22T12:35:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_35_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:35:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3727 chars]
	I0122 12:35:20.946690    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:35:20.946939    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:20.946939    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:20.946939    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:20.950710    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:35:20.950871    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:20.950871    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:20.950871    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:20.950871    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:20.950871    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:20 GMT
	I0122 12:35:20.950871    9928 round_trippers.go:580]     Audit-Id: 97d554d0-d632-43de-afdf-1d5919217fcb
	I0122 12:35:20.950871    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:20.951422    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"ee2c1689-6eea-4fcd-b8b3-161d3aacd5be","resourceVersion":"1944","creationTimestamp":"2024-01-22T12:35:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_35_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:35:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3727 chars]
	I0122 12:35:20.951918    9928 node_ready.go:58] node "multinode-484200-m02" has status "Ready":"False"
	I0122 12:35:21.434748    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:35:21.435023    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:21.435023    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:21.435023    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:21.439669    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:35:21.439669    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:21.439838    9928 round_trippers.go:580]     Audit-Id: f3cb48d2-e2ea-4b1b-87b5-a166c33fa2a0
	I0122 12:35:21.439892    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:21.439917    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:21.439917    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:21.439947    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:21.439947    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:21 GMT
	I0122 12:35:21.440478    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"ee2c1689-6eea-4fcd-b8b3-161d3aacd5be","resourceVersion":"1944","creationTimestamp":"2024-01-22T12:35:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_35_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:35:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3727 chars]
	I0122 12:35:21.934454    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:35:21.934539    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:21.934539    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:21.934539    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:21.937944    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:35:21.937944    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:21.938761    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:21.938761    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:21.938761    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:21.938761    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:21.938869    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:21 GMT
	I0122 12:35:21.938930    9928 round_trippers.go:580]     Audit-Id: 45500207-bed1-424d-aa90-fd6c15fe8a26
	I0122 12:35:21.939115    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"ee2c1689-6eea-4fcd-b8b3-161d3aacd5be","resourceVersion":"1944","creationTimestamp":"2024-01-22T12:35:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_35_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:35:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3727 chars]
	I0122 12:35:22.438683    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:35:22.438683    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:22.438683    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:22.438823    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:22.445113    9928 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0122 12:35:22.445113    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:22.445113    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:22.445113    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:22.445113    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:22.445113    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:22 GMT
	I0122 12:35:22.445113    9928 round_trippers.go:580]     Audit-Id: 805dd9c1-2307-4a14-a3e7-ebddd8367c6a
	I0122 12:35:22.445113    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:22.445113    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"ee2c1689-6eea-4fcd-b8b3-161d3aacd5be","resourceVersion":"1944","creationTimestamp":"2024-01-22T12:35:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_35_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:35:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3727 chars]
	I0122 12:35:22.942919    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:35:22.943222    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:22.943222    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:22.943323    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:22.946892    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:35:22.946892    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:22.946992    9928 round_trippers.go:580]     Audit-Id: d662a3c5-b413-44f6-9df9-ae88efebb24b
	I0122 12:35:22.946992    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:22.946992    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:22.946992    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:22.947123    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:22.947123    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:22 GMT
	I0122 12:35:22.947290    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"ee2c1689-6eea-4fcd-b8b3-161d3aacd5be","resourceVersion":"1944","creationTimestamp":"2024-01-22T12:35:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_35_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:35:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3727 chars]
	I0122 12:35:23.446270    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:35:23.446410    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:23.446410    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:23.446410    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:23.449352    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:35:23.450373    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:23.450373    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:23 GMT
	I0122 12:35:23.450373    9928 round_trippers.go:580]     Audit-Id: 2355357e-20aa-4d24-985d-1d6a2f370f50
	I0122 12:35:23.450438    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:23.450438    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:23.450438    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:23.450438    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:23.450548    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"ee2c1689-6eea-4fcd-b8b3-161d3aacd5be","resourceVersion":"1944","creationTimestamp":"2024-01-22T12:35:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_35_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:35:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3727 chars]
	I0122 12:35:23.450858    9928 node_ready.go:58] node "multinode-484200-m02" has status "Ready":"False"
	I0122 12:35:23.946793    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:35:23.946877    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:23.946877    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:23.946942    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:23.949368    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:35:23.950429    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:23.950429    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:23.950429    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:23.950429    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:23 GMT
	I0122 12:35:23.950429    9928 round_trippers.go:580]     Audit-Id: cf2b837b-515c-4cce-8ac2-f1ca7c886efb
	I0122 12:35:23.950429    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:23.950429    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:23.950629    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"ee2c1689-6eea-4fcd-b8b3-161d3aacd5be","resourceVersion":"1944","creationTimestamp":"2024-01-22T12:35:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_35_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:35:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3727 chars]
	I0122 12:35:24.432970    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:35:24.433076    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:24.433076    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:24.433076    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:24.437021    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:35:24.437021    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:24.437181    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:24.437181    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:24.437181    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:24.437181    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:24.437181    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:24 GMT
	I0122 12:35:24.437181    9928 round_trippers.go:580]     Audit-Id: 09727209-bb96-48f7-853f-cd41f5e11eeb
	I0122 12:35:24.437362    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"ee2c1689-6eea-4fcd-b8b3-161d3aacd5be","resourceVersion":"1944","creationTimestamp":"2024-01-22T12:35:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_35_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:35:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3727 chars]
	I0122 12:35:24.934468    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:35:24.934468    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:24.934468    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:24.934468    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:24.940556    9928 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:35:24.940833    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:24.940833    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:24.940914    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:24 GMT
	I0122 12:35:24.940914    9928 round_trippers.go:580]     Audit-Id: 1250f2b5-d70c-4af2-98e6-7d0a1a2f588e
	I0122 12:35:24.940914    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:24.940914    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:24.940914    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:24.941197    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"ee2c1689-6eea-4fcd-b8b3-161d3aacd5be","resourceVersion":"1944","creationTimestamp":"2024-01-22T12:35:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_35_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:35:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3727 chars]
	I0122 12:35:25.435326    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:35:25.435406    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:25.435406    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:25.435406    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:25.441744    9928 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0122 12:35:25.441744    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:25.441880    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:25.441880    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:25.441880    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:25.441930    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:25.441930    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:25 GMT
	I0122 12:35:25.441930    9928 round_trippers.go:580]     Audit-Id: a1c8d148-d025-4739-8b8d-943365906567
	I0122 12:35:25.441930    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"ee2c1689-6eea-4fcd-b8b3-161d3aacd5be","resourceVersion":"1944","creationTimestamp":"2024-01-22T12:35:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_35_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:35:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3727 chars]
	I0122 12:35:25.937508    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:35:25.937601    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:25.937601    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:25.937601    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:25.941542    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:35:25.942106    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:25.942106    9928 round_trippers.go:580]     Audit-Id: d00c83a7-3a1a-41cc-a5c5-e3ed173b291d
	I0122 12:35:25.942106    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:25.942106    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:25.942106    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:25.942106    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:25.942203    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:25 GMT
	I0122 12:35:25.942266    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"ee2c1689-6eea-4fcd-b8b3-161d3aacd5be","resourceVersion":"1944","creationTimestamp":"2024-01-22T12:35:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_35_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:35:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3727 chars]
	I0122 12:35:25.942853    9928 node_ready.go:58] node "multinode-484200-m02" has status "Ready":"False"
	I0122 12:35:26.440628    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:35:26.440628    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:26.440628    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:26.440628    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:26.444634    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:35:26.444950    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:26.445036    9928 round_trippers.go:580]     Audit-Id: 915e36d7-5387-4580-aadf-4140f6c42993
	I0122 12:35:26.445036    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:26.445121    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:26.445212    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:26.445230    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:26.445230    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:26 GMT
	I0122 12:35:26.445380    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"ee2c1689-6eea-4fcd-b8b3-161d3aacd5be","resourceVersion":"1965","creationTimestamp":"2024-01-22T12:35:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_35_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:35:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3926 chars]
	I0122 12:35:26.445931    9928 node_ready.go:49] node "multinode-484200-m02" has status "Ready":"True"
	I0122 12:35:26.445931    9928 node_ready.go:38] duration metric: took 7.514111s waiting for node "multinode-484200-m02" to be "Ready" ...
	I0122 12:35:26.445931    9928 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0122 12:35:26.446171    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods
	I0122 12:35:26.446241    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:26.446241    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:26.446241    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:26.450623    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:35:26.451177    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:26.451177    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:26.451321    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:26 GMT
	I0122 12:35:26.451321    9928 round_trippers.go:580]     Audit-Id: 78c55c47-5380-4d97-839c-a47739f56ed2
	I0122 12:35:26.451321    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:26.451321    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:26.451321    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:26.453457    9928 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1967"},"items":[{"metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1783","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83433 chars]
	I0122 12:35:26.457610    9928 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-g9wk9" in "kube-system" namespace to be "Ready" ...
	I0122 12:35:26.457800    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g9wk9
	I0122 12:35:26.457800    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:26.457800    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:26.457800    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:26.461113    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:35:26.461113    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:26.461113    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:26.461113    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:26.461190    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:26.461190    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:26 GMT
	I0122 12:35:26.461190    9928 round_trippers.go:580]     Audit-Id: c0d44c98-09ff-4a93-bcd7-42a3adbdc300
	I0122 12:35:26.461190    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:26.461484    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g9wk9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d5b665f2-ead9-43b8-8bf6-73c2edf81f8f","resourceVersion":"1783","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9b727fcf-193c-475c-8cdf-bac5f391342b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b727fcf-193c-475c-8cdf-bac5f391342b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6494 chars]
	I0122 12:35:26.462106    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:35:26.462106    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:26.462106    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:26.462106    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:26.464766    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:35:26.464766    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:26.464766    9928 round_trippers.go:580]     Audit-Id: 27e94612-5805-4920-a4c9-0f18bd83e7e7
	I0122 12:35:26.465255    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:26.465255    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:26.465255    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:26.465255    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:26.465255    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:26 GMT
	I0122 12:35:26.465561    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:35:26.465946    9928 pod_ready.go:92] pod "coredns-5dd5756b68-g9wk9" in "kube-system" namespace has status "Ready":"True"
	I0122 12:35:26.466039    9928 pod_ready.go:81] duration metric: took 8.3517ms waiting for pod "coredns-5dd5756b68-g9wk9" in "kube-system" namespace to be "Ready" ...
	I0122 12:35:26.466039    9928 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:35:26.466177    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-484200
	I0122 12:35:26.466177    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:26.466244    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:26.466268    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:26.471307    9928 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:35:26.471307    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:26.471307    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:26 GMT
	I0122 12:35:26.471307    9928 round_trippers.go:580]     Audit-Id: 4c28e895-4ba3-45c3-a398-1d1e152ced18
	I0122 12:35:26.471307    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:26.471307    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:26.471307    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:26.471307    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:26.471881    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-484200","namespace":"kube-system","uid":"dc38500e-1aab-483b-bc82-a1071a7b7796","resourceVersion":"1771","creationTimestamp":"2024-01-22T12:32:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.191.211:2379","kubernetes.io/config.hash":"487d659dc59bb7ccbc4aa1b97ecfd4c1","kubernetes.io/config.mirror":"487d659dc59bb7ccbc4aa1b97ecfd4c1","kubernetes.io/config.seen":"2024-01-22T12:32:47.015233808Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:32:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 5873 chars]
	I0122 12:35:26.472829    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:35:26.472906    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:26.472906    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:26.472906    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:26.476108    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:35:26.476108    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:26.476108    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:26.476108    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:26 GMT
	I0122 12:35:26.476108    9928 round_trippers.go:580]     Audit-Id: 81e03753-195d-4d8c-9803-56ffe4a2f0e8
	I0122 12:35:26.476108    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:26.476108    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:26.476108    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:26.476857    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:35:26.477293    9928 pod_ready.go:92] pod "etcd-multinode-484200" in "kube-system" namespace has status "Ready":"True"
	I0122 12:35:26.477293    9928 pod_ready.go:81] duration metric: took 11.2545ms waiting for pod "etcd-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:35:26.477341    9928 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:35:26.477438    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-484200
	I0122 12:35:26.477438    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:26.477438    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:26.477438    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:26.480198    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:35:26.480198    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:26.481058    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:26 GMT
	I0122 12:35:26.481101    9928 round_trippers.go:580]     Audit-Id: 5b55bdd4-488a-4aa1-941c-fa271909e030
	I0122 12:35:26.481101    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:26.481101    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:26.481101    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:26.481101    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:26.481101    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-484200","namespace":"kube-system","uid":"1a34c38d-813d-4a41-ab10-ee15c4a85906","resourceVersion":"1757","creationTimestamp":"2024-01-22T12:32:54Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.191.211:8443","kubernetes.io/config.hash":"58aa3387db541c8286ea4c05e28153aa","kubernetes.io/config.mirror":"58aa3387db541c8286ea4c05e28153aa","kubernetes.io/config.seen":"2024-01-22T12:32:47.015239308Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:32:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7409 chars]
	I0122 12:35:26.482060    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:35:26.482164    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:26.482164    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:26.482164    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:26.483838    9928 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0122 12:35:26.483838    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:26.483838    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:26.484824    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:26.484824    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:26.484824    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:26 GMT
	I0122 12:35:26.484824    9928 round_trippers.go:580]     Audit-Id: f5aafc79-d111-4fdb-9c12-db65a09d4bfe
	I0122 12:35:26.484824    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:26.484824    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:35:26.484824    9928 pod_ready.go:92] pod "kube-apiserver-multinode-484200" in "kube-system" namespace has status "Ready":"True"
	I0122 12:35:26.484824    9928 pod_ready.go:81] duration metric: took 7.4522ms waiting for pod "kube-apiserver-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:35:26.484824    9928 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:35:26.485637    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-484200
	I0122 12:35:26.485637    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:26.485637    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:26.485637    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:26.488732    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:35:26.488732    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:26.488732    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:26.488732    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:26.488732    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:26.488732    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:26.488732    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:26 GMT
	I0122 12:35:26.489588    9928 round_trippers.go:580]     Audit-Id: 584aaa57-9228-4112-b620-95d2de58f979
	I0122 12:35:26.489682    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-484200","namespace":"kube-system","uid":"2f20b1d1-9ba0-417b-9060-a0e50a1522ad","resourceVersion":"1769","creationTimestamp":"2024-01-22T12:10:47Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"09e3d582188e7c296d7c87aae161a2a5","kubernetes.io/config.mirror":"09e3d582188e7c296d7c87aae161a2a5","kubernetes.io/config.seen":"2024-01-22T12:10:46.856991974Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7179 chars]
	I0122 12:35:26.490490    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:35:26.490490    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:26.490490    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:26.490490    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:26.493111    9928 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0122 12:35:26.493413    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:26.493413    9928 round_trippers.go:580]     Audit-Id: ef3f1cbf-9c2c-4471-a2c2-f18b32c43e96
	I0122 12:35:26.493413    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:26.493478    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:26.493529    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:26.493578    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:26.493578    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:26 GMT
	I0122 12:35:26.493578    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:35:26.494285    9928 pod_ready.go:92] pod "kube-controller-manager-multinode-484200" in "kube-system" namespace has status "Ready":"True"
	I0122 12:35:26.494329    9928 pod_ready.go:81] duration metric: took 9.5051ms waiting for pod "kube-controller-manager-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:35:26.494329    9928 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m94rg" in "kube-system" namespace to be "Ready" ...
	I0122 12:35:26.644301    9928 request.go:629] Waited for 149.7061ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m94rg
	I0122 12:35:26.644301    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m94rg
	I0122 12:35:26.644301    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:26.644529    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:26.644529    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:26.648595    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:35:26.648595    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:26.649604    9928 round_trippers.go:580]     Audit-Id: fc5145e8-15f9-4e9b-aa76-41ad10153172
	I0122 12:35:26.649604    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:26.649638    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:26.649638    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:26.649638    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:26.649638    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:26 GMT
	I0122 12:35:26.649809    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-m94rg","generateName":"kube-proxy-","namespace":"kube-system","uid":"603ddbfd-0357-4e57-a38a-5054e02f81ee","resourceVersion":"1740","creationTimestamp":"2024-01-22T12:10:59Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7762395d-4348-422a-b76e-6e2cdf460767","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7762395d-4348-422a-b76e-6e2cdf460767\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5743 chars]
	I0122 12:35:26.848951    9928 request.go:629] Waited for 198.0154ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:35:26.849059    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:35:26.849059    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:26.849059    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:26.849219    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:26.855031    9928 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:35:26.855031    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:26.855031    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:26.855031    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:26.855031    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:26.855031    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:26 GMT
	I0122 12:35:26.855031    9928 round_trippers.go:580]     Audit-Id: 6ef9a331-dcec-429b-b769-1094a4fb83a8
	I0122 12:35:26.855193    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:26.855425    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:35:26.856462    9928 pod_ready.go:92] pod "kube-proxy-m94rg" in "kube-system" namespace has status "Ready":"True"
	I0122 12:35:26.856462    9928 pod_ready.go:81] duration metric: took 362.1314ms waiting for pod "kube-proxy-m94rg" in "kube-system" namespace to be "Ready" ...
	I0122 12:35:26.856644    9928 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mxd9x" in "kube-system" namespace to be "Ready" ...
	I0122 12:35:27.053024    9928 request.go:629] Waited for 195.8904ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxd9x
	I0122 12:35:27.053311    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxd9x
	I0122 12:35:27.053522    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:27.053576    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:27.053576    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:27.061217    9928 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0122 12:35:27.061217    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:27.061217    9928 round_trippers.go:580]     Audit-Id: 6ad44ce7-2adf-4b1b-9521-3dd81b76ef37
	I0122 12:35:27.061516    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:27.061516    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:27.061516    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:27.061516    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:27.061516    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:27 GMT
	I0122 12:35:27.061806    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mxd9x","generateName":"kube-proxy-","namespace":"kube-system","uid":"f90d5b6e-4f0e-466f-8cc8-bc71ebdbad3c","resourceVersion":"1824","creationTimestamp":"2024-01-22T12:18:24Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7762395d-4348-422a-b76e-6e2cdf460767","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:18:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7762395d-4348-422a-b76e-6e2cdf460767\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5976 chars]
	I0122 12:35:27.240773    9928 request.go:629] Waited for 178.265ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:35:27.240994    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:35:27.240994    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:27.240994    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:27.241085    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:27.244894    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:35:27.245355    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:27.245355    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:27 GMT
	I0122 12:35:27.245407    9928 round_trippers.go:580]     Audit-Id: 81c8ec66-3696-4977-9ae5-a542752b6b21
	I0122 12:35:27.245407    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:27.245407    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:27.245407    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:27.245460    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:27.248156    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"1d1fc222-9802-4017-8aee-b3310ac58592","resourceVersion":"1946","creationTimestamp":"2024-01-22T12:28:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_35_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:28:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 4393 chars]
	I0122 12:35:27.248356    9928 pod_ready.go:97] node "multinode-484200-m03" hosting pod "kube-proxy-mxd9x" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484200-m03" has status "Ready":"Unknown"
	I0122 12:35:27.248356    9928 pod_ready.go:81] duration metric: took 391.7105ms waiting for pod "kube-proxy-mxd9x" in "kube-system" namespace to be "Ready" ...
	E0122 12:35:27.248356    9928 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-484200-m03" hosting pod "kube-proxy-mxd9x" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-484200-m03" has status "Ready":"Unknown"
	I0122 12:35:27.248356    9928 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nsjc6" in "kube-system" namespace to be "Ready" ...
	I0122 12:35:27.444546    9928 request.go:629] Waited for 196.019ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nsjc6
	I0122 12:35:27.444613    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nsjc6
	I0122 12:35:27.444613    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:27.444613    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:27.444613    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:27.450231    9928 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:35:27.450468    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:27.450468    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:27.450468    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:27.450546    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:27 GMT
	I0122 12:35:27.450546    9928 round_trippers.go:580]     Audit-Id: 6a1e5f56-e988-4402-a362-161dd3a9935e
	I0122 12:35:27.450546    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:27.450546    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:27.450546    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nsjc6","generateName":"kube-proxy-","namespace":"kube-system","uid":"6023ce45-661a-489d-aacf-21a85a2c32e4","resourceVersion":"1947","creationTimestamp":"2024-01-22T12:13:52Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7762395d-4348-422a-b76e-6e2cdf460767","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7762395d-4348-422a-b76e-6e2cdf460767\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5751 chars]
	I0122 12:35:27.646434    9928 request.go:629] Waited for 195.1148ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:35:27.646751    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m02
	I0122 12:35:27.646751    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:27.646751    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:27.646751    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:27.654639    9928 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0122 12:35:27.654798    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:27.654798    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:27.654859    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:27.654859    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:27.654859    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:27 GMT
	I0122 12:35:27.654859    9928 round_trippers.go:580]     Audit-Id: bc45893b-4b99-49af-9bac-061f9306a9c1
	I0122 12:35:27.654859    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:27.654859    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m02","uid":"ee2c1689-6eea-4fcd-b8b3-161d3aacd5be","resourceVersion":"1969","creationTimestamp":"2024-01-22T12:35:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_35_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:35:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3806 chars]
	I0122 12:35:27.655625    9928 pod_ready.go:92] pod "kube-proxy-nsjc6" in "kube-system" namespace has status "Ready":"True"
	I0122 12:35:27.655625    9928 pod_ready.go:81] duration metric: took 407.2669ms waiting for pod "kube-proxy-nsjc6" in "kube-system" namespace to be "Ready" ...
	I0122 12:35:27.655625    9928 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:35:27.849781    9928 request.go:629] Waited for 193.751ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-484200
	I0122 12:35:27.849837    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-484200
	I0122 12:35:27.849837    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:27.849837    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:27.849837    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:27.854262    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:35:27.854262    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:27.854262    9928 round_trippers.go:580]     Audit-Id: 15954030-35bd-41ab-892c-650fef65d433
	I0122 12:35:27.854262    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:27.854262    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:27.854262    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:27.854262    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:27.854262    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:27 GMT
	I0122 12:35:27.854262    9928 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-484200","namespace":"kube-system","uid":"d2f95119-a5d5-4331-880e-bf6add1d87b6","resourceVersion":"1753","creationTimestamp":"2024-01-22T12:10:47Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"59ca1fdff10e7440743e9ed3e4fed55c","kubernetes.io/config.mirror":"59ca1fdff10e7440743e9ed3e4fed55c","kubernetes.io/config.seen":"2024-01-22T12:10:46.856993074Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:10:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4909 chars]
	I0122 12:35:28.052801    9928 request.go:629] Waited for 197.2338ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:35:28.052801    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200
	I0122 12:35:28.052801    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:28.052801    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:28.052801    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:28.058438    9928 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:35:28.059172    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:28.059172    9928 round_trippers.go:580]     Audit-Id: 167302ba-7f6e-4ab1-9c46-11eabe879fb3
	I0122 12:35:28.059172    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:28.059172    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:28.059172    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:28.059172    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:28.059172    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:28 GMT
	I0122 12:35:28.059172    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-22T12:10:43Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0122 12:35:28.059957    9928 pod_ready.go:92] pod "kube-scheduler-multinode-484200" in "kube-system" namespace has status "Ready":"True"
	I0122 12:35:28.059957    9928 pod_ready.go:81] duration metric: took 404.3303ms waiting for pod "kube-scheduler-multinode-484200" in "kube-system" namespace to be "Ready" ...
	I0122 12:35:28.059957    9928 pod_ready.go:38] duration metric: took 1.6140197s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0122 12:35:28.059957    9928 system_svc.go:44] waiting for kubelet service to be running ....
	I0122 12:35:28.080523    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 12:35:28.101342    9928 system_svc.go:56] duration metric: took 41.3851ms WaitForService to wait for kubelet.
	I0122 12:35:28.101471    9928 kubeadm.go:581] duration metric: took 9.2091519s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0122 12:35:28.101471    9928 node_conditions.go:102] verifying NodePressure condition ...
	I0122 12:35:28.255369    9928 request.go:629] Waited for 153.7017ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.191.211:8443/api/v1/nodes
	I0122 12:35:28.255492    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes
	I0122 12:35:28.255492    9928 round_trippers.go:469] Request Headers:
	I0122 12:35:28.255492    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:35:28.255492    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:35:28.260429    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:35:28.260429    9928 round_trippers.go:577] Response Headers:
	I0122 12:35:28.260429    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:35:28 GMT
	I0122 12:35:28.260429    9928 round_trippers.go:580]     Audit-Id: 9f7e670a-796d-4a27-81d5-e4328198c06f
	I0122 12:35:28.260429    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:35:28.260429    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:35:28.260429    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:35:28.260429    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:35:28.261681    9928 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1973"},"items":[{"metadata":{"name":"multinode-484200","uid":"6f5ca0c2-f3eb-4600-b746-e296786b5b80","resourceVersion":"1751","creationTimestamp":"2024-01-22T12:10:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_22T12_10_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15475 chars]
	I0122 12:35:28.262852    9928 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0122 12:35:28.262933    9928 node_conditions.go:123] node cpu capacity is 2
	I0122 12:35:28.262933    9928 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0122 12:35:28.263032    9928 node_conditions.go:123] node cpu capacity is 2
	I0122 12:35:28.263050    9928 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0122 12:35:28.263050    9928 node_conditions.go:123] node cpu capacity is 2
	I0122 12:35:28.263050    9928 node_conditions.go:105] duration metric: took 161.578ms to run NodePressure ...
	I0122 12:35:28.263050    9928 start.go:228] waiting for startup goroutines ...
	I0122 12:35:28.263120    9928 start.go:242] writing updated cluster config ...
	I0122 12:35:28.279657    9928 config.go:182] Loaded profile config "multinode-484200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 12:35:28.279657    9928 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\config.json ...
	I0122 12:35:28.284547    9928 out.go:177] * Starting worker node multinode-484200-m03 in cluster multinode-484200
	I0122 12:35:28.285111    9928 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0122 12:35:28.285111    9928 cache.go:56] Caching tarball of preloaded images
	I0122 12:35:28.285111    9928 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0122 12:35:28.285111    9928 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0122 12:35:28.285779    9928 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\config.json ...
	I0122 12:35:28.292491    9928 start.go:365] acquiring machines lock for multinode-484200-m03: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0122 12:35:28.292491    9928 start.go:369] acquired machines lock for "multinode-484200-m03" in 0s
	I0122 12:35:28.293459    9928 start.go:96] Skipping create...Using existing machine configuration
	I0122 12:35:28.293459    9928 fix.go:54] fixHost starting: m03
	I0122 12:35:28.293459    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:35:30.432135    9928 main.go:141] libmachine: [stdout =====>] : Off
	
	I0122 12:35:30.432282    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:35:30.432282    9928 fix.go:102] recreateIfNeeded on multinode-484200-m03: state=Stopped err=<nil>
	W0122 12:35:30.432282    9928 fix.go:128] unexpected machine state, will restart: <nil>
	I0122 12:35:30.433783    9928 out.go:177] * Restarting existing hyperv VM for "multinode-484200-m03" ...
	I0122 12:35:30.434229    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-484200-m03
	I0122 12:35:32.949069    9928 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:35:32.949069    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:35:32.949264    9928 main.go:141] libmachine: Waiting for host to start...
	I0122 12:35:32.949264    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:35:35.252818    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:35:35.252818    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:35:35.252818    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:35:37.781335    9928 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:35:37.781335    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:35:38.784839    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:35:41.073052    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:35:41.073127    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:35:41.073226    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:35:43.663502    9928 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:35:43.663502    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:35:44.666972    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:35:46.906535    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:35:46.906794    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:35:46.906794    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:35:49.514914    9928 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:35:49.514914    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:35:50.520755    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:35:52.765806    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:35:52.765806    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:35:52.765891    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:35:55.390435    9928 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:35:55.390435    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:35:56.393757    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:35:58.683614    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:35:58.683873    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:35:58.683921    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:36:01.318946    9928 main.go:141] libmachine: [stdout =====>] : 172.23.181.87
	
	I0122 12:36:01.319066    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:01.322354    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:36:03.520512    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:36:03.520512    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:03.520512    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:36:06.109581    9928 main.go:141] libmachine: [stdout =====>] : 172.23.181.87
	
	I0122 12:36:06.109660    9928 main.go:141] libmachine: [stderr =====>] : 
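The sequence above shows minikube polling Hyper-V through PowerShell (`(( Hyper-V\Get-VM <name> ).networkadapters[0]).ipaddresses[0]`) and retrying while stdout comes back empty, until the guest finishes booting and reports `172.23.181.87`. A minimal sketch of that poll-until-nonempty loop, where `get_vm_ip` is a hypothetical stand-in for the PowerShell query that stays empty for the first few polls:

```shell
#!/bin/sh
# Sketch of the poll-until-IP loop from the log above. `get_vm_ip` is a
# hypothetical stand-in for the powershell.exe query; like a guest that has
# not finished DHCP yet, it returns nothing for the first three polls.
get_vm_ip() {
    # $1 = poll number; simulate an empty answer for the first three polls.
    if [ "$1" -le 3 ]; then echo ""; else echo "172.23.181.87"; fi
}

ip=""
tries=0
while [ -z "$ip" ] && [ "$tries" -lt 10 ]; do
    tries=$((tries + 1))
    ip=$(get_vm_ip "$tries")
    # The real loop sleeps ~1s between polls; omitted to keep the demo fast.
done
echo "got IP $ip after $tries polls"
```

The bounded `tries` counter mirrors the real code's overall timeout: the loop gives up eventually rather than spinning forever on a VM that never acquires an address.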
	I0122 12:36:06.110021    9928 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200\config.json ...
	I0122 12:36:06.112549    9928 machine.go:88] provisioning docker machine ...
	I0122 12:36:06.112631    9928 buildroot.go:166] provisioning hostname "multinode-484200-m03"
	I0122 12:36:06.112631    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:36:08.278582    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:36:08.278582    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:08.278772    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:36:10.876025    9928 main.go:141] libmachine: [stdout =====>] : 172.23.181.87
	
	I0122 12:36:10.876229    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:10.884287    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:36:10.885198    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.181.87 22 <nil> <nil>}
	I0122 12:36:10.885198    9928 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-484200-m03 && echo "multinode-484200-m03" | sudo tee /etc/hostname
	I0122 12:36:11.049385    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-484200-m03
	
	I0122 12:36:11.049385    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:36:13.249746    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:36:13.249805    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:13.249805    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:36:15.829391    9928 main.go:141] libmachine: [stdout =====>] : 172.23.181.87
	
	I0122 12:36:15.829391    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:15.835464    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:36:15.835764    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.181.87 22 <nil> <nil>}
	I0122 12:36:15.836348    9928 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-484200-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-484200-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-484200-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
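The embedded script above is minikube's idempotent hostname fix-up: if no line in /etc/hosts already names the machine, it either rewrites the existing `127.0.1.1` entry in place or appends a fresh one. A sketch of the same logic run against a scratch file (hypothetical contents, no sudo/tee needed), assuming GNU grep/sed:

```shell
#!/bin/sh
# Sketch of the /etc/hosts update from the log, applied to a scratch copy
# instead of the real file (so no sudo is needed). Assumes GNU sed/grep.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$hosts"
name="multinode-484200-m03"

if ! grep -q "[[:space:]]$name" "$hosts"; then
    if grep -q '^127\.0\.1\.1[[:space:]]' "$hosts"; then
        # An existing 127.0.1.1 entry is rewritten in place ...
        sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $name/" "$hosts"
    else
        # ... otherwise a fresh entry is appended (the tee -a branch above).
        echo "127.0.1.1 $name" >> "$hosts"
    fi
fi
entry=$(grep '^127\.0\.1\.1' "$hosts")
rm -f "$hosts"
echo "$entry"
```

Because the outer `grep` guards the whole block, re-running the script on an already-updated file is a no-op, which is what lets minikube re-provision the same VM safely.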
	I0122 12:36:15.990546    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 12:36:15.990626    9928 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0122 12:36:15.990626    9928 buildroot.go:174] setting up certificates
	I0122 12:36:15.990626    9928 provision.go:83] configureAuth start
	I0122 12:36:15.990724    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:36:18.137922    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:36:18.138227    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:18.138227    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:36:20.716189    9928 main.go:141] libmachine: [stdout =====>] : 172.23.181.87
	
	I0122 12:36:20.716248    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:20.716323    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:36:22.836723    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:36:22.837026    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:22.837026    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:36:25.417189    9928 main.go:141] libmachine: [stdout =====>] : 172.23.181.87
	
	I0122 12:36:25.417189    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:25.417278    9928 provision.go:138] copyHostCerts
	I0122 12:36:25.417278    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0122 12:36:25.417278    9928 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0122 12:36:25.417278    9928 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0122 12:36:25.418191    9928 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0122 12:36:25.419251    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0122 12:36:25.419251    9928 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0122 12:36:25.419251    9928 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0122 12:36:25.420069    9928 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0122 12:36:25.420919    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0122 12:36:25.420919    9928 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0122 12:36:25.420919    9928 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0122 12:36:25.421743    9928 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0122 12:36:25.422644    9928 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-484200-m03 san=[172.23.181.87 172.23.181.87 localhost 127.0.0.1 minikube multinode-484200-m03]
	I0122 12:36:25.686530    9928 provision.go:172] copyRemoteCerts
	I0122 12:36:25.698886    9928 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0122 12:36:25.698886    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:36:27.899158    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:36:27.899158    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:27.899158    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:36:30.473383    9928 main.go:141] libmachine: [stdout =====>] : 172.23.181.87
	
	I0122 12:36:30.473383    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:30.473383    9928 sshutil.go:53] new ssh client: &{IP:172.23.181.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200-m03\id_rsa Username:docker}
	I0122 12:36:30.582960    9928 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8837906s)
	I0122 12:36:30.583143    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0122 12:36:30.583462    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0122 12:36:30.633621    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0122 12:36:30.633621    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I0122 12:36:30.672769    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0122 12:36:30.672913    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0122 12:36:30.712441    9928 provision.go:86] duration metric: configureAuth took 14.7216588s
	I0122 12:36:30.712501    9928 buildroot.go:189] setting minikube options for container-runtime
	I0122 12:36:30.713083    9928 config.go:182] Loaded profile config "multinode-484200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 12:36:30.713122    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:36:32.876696    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:36:32.876696    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:32.876911    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:36:35.449271    9928 main.go:141] libmachine: [stdout =====>] : 172.23.181.87
	
	I0122 12:36:35.449431    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:35.455322    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:36:35.456018    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.181.87 22 <nil> <nil>}
	I0122 12:36:35.456018    9928 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0122 12:36:35.598102    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0122 12:36:35.598102    9928 buildroot.go:70] root file system type: tmpfs
	I0122 12:36:35.598360    9928 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0122 12:36:35.598424    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:36:37.739597    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:36:37.739597    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:37.739816    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:36:40.277829    9928 main.go:141] libmachine: [stdout =====>] : 172.23.181.87
	
	I0122 12:36:40.277829    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:40.282800    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:36:40.283619    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.181.87 22 <nil> <nil>}
	I0122 12:36:40.284246    9928 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.23.191.211"
	Environment="NO_PROXY=172.23.191.211,172.23.187.210"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0122 12:36:40.448278    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.23.191.211
	Environment=NO_PROXY=172.23.191.211,172.23.187.210
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0122 12:36:40.448278    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:36:42.596863    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:36:42.597111    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:42.597229    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:36:45.146049    9928 main.go:141] libmachine: [stdout =====>] : 172.23.181.87
	
	I0122 12:36:45.146049    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:45.152084    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:36:45.152793    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.181.87 22 <nil> <nil>}
	I0122 12:36:45.152793    9928 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0122 12:36:46.215147    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0122 12:36:46.215256    9928 machine.go:91] provisioned docker machine in 40.1024956s
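The `sudo diff -u ... || { sudo mv ...; ... systemctl -f restart docker; }` one-liner above is a change-detection guard: `diff` exits 0 when the rendered unit matches the installed one, so the `||` branch (install and restart) fires only when the files differ, or, as in this log, when the old unit does not exist yet. A sketch of the pattern on scratch files, with the restart reduced to a flag:

```shell
#!/bin/sh
# Sketch of the diff-and-swap guard from the log: install the new unit and
# "restart" only when it differs from the current one. Scratch files stand
# in for /lib/systemd/system/docker.service{,.new}.
old=$(mktemp)
new=$(mktemp)
printf 'ExecStart=/usr/bin/dockerd --old-flags\n' > "$old"
printf 'ExecStart=/usr/bin/dockerd --new-flags\n' > "$new"

apply() {
    # diff exits non-zero on any difference (or a missing file), so the ||
    # branch installs the new unit and records that a restart would happen.
    restarted=no
    diff -u "$old" "$new" >/dev/null 2>&1 || { mv "$new" "$old"; restarted=yes; }
}

apply; first=$restarted    # first run: files differ -> swap and restart

printf 'ExecStart=/usr/bin/dockerd --new-flags\n' > "$new"
apply; second=$restarted   # second run: now identical -> no-op
rm -f "$old" "$new"
echo "first=$first second=$second"
```

Skipping the restart when nothing changed is what keeps repeated provisioning runs from bouncing the Docker daemon (and every container on it) unnecessarily.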
	I0122 12:36:46.215256    9928 start.go:300] post-start starting for "multinode-484200-m03" (driver="hyperv")
	I0122 12:36:46.215256    9928 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0122 12:36:46.229048    9928 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0122 12:36:46.229048    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:36:48.385994    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:36:48.386409    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:48.386492    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:36:50.931182    9928 main.go:141] libmachine: [stdout =====>] : 172.23.181.87
	
	I0122 12:36:50.931182    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:50.931796    9928 sshutil.go:53] new ssh client: &{IP:172.23.181.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200-m03\id_rsa Username:docker}
	I0122 12:36:51.042274    9928 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8132052s)
	I0122 12:36:51.055263    9928 ssh_runner.go:195] Run: cat /etc/os-release
	I0122 12:36:51.061736    9928 command_runner.go:130] > NAME=Buildroot
	I0122 12:36:51.061736    9928 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0122 12:36:51.061736    9928 command_runner.go:130] > ID=buildroot
	I0122 12:36:51.061736    9928 command_runner.go:130] > VERSION_ID=2021.02.12
	I0122 12:36:51.061736    9928 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0122 12:36:51.062063    9928 info.go:137] Remote host: Buildroot 2021.02.12
	I0122 12:36:51.062155    9928 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0122 12:36:51.062714    9928 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0122 12:36:51.064049    9928 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> 63962.pem in /etc/ssl/certs
	I0122 12:36:51.064112    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> /etc/ssl/certs/63962.pem
	I0122 12:36:51.078089    9928 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0122 12:36:51.095169    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem --> /etc/ssl/certs/63962.pem (1708 bytes)
	I0122 12:36:51.138950    9928 start.go:303] post-start completed in 4.9236723s
	I0122 12:36:51.140083    9928 fix.go:56] fixHost completed within 1m22.8462679s
	I0122 12:36:51.140188    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:36:53.316024    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:36:53.316024    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:53.316134    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:36:55.963590    9928 main.go:141] libmachine: [stdout =====>] : 172.23.181.87
	
	I0122 12:36:55.963590    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:55.968500    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:36:55.969250    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.181.87 22 <nil> <nil>}
	I0122 12:36:55.969774    9928 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0122 12:36:56.110828    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705927016.111564075
	
	I0122 12:36:56.110929    9928 fix.go:206] guest clock: 1705927016.111564075
	I0122 12:36:56.110929    9928 fix.go:219] Guest: 2024-01-22 12:36:56.111564075 +0000 UTC Remote: 2024-01-22 12:36:51.1400838 +0000 UTC m=+360.951539301 (delta=4.971480275s)
	I0122 12:36:56.111024    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:36:58.328820    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:36:58.328984    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:36:58.329245    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:37:00.952705    9928 main.go:141] libmachine: [stdout =====>] : 172.23.181.87
	
	I0122 12:37:00.952705    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:37:00.958346    9928 main.go:141] libmachine: Using SSH client type: native
	I0122 12:37:00.959122    9928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.181.87 22 <nil> <nil>}
	I0122 12:37:00.959122    9928 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1705927016
	I0122 12:37:01.108609    9928 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan 22 12:36:56 UTC 2024
	
	I0122 12:37:01.108609    9928 fix.go:226] clock set: Mon Jan 22 12:36:56 UTC 2024
	 (err=<nil>)
	I0122 12:37:01.108609    9928 start.go:83] releasing machines lock for "multinode-484200-m03", held for 1m32.8157192s
	I0122 12:37:01.108609    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:37:03.310877    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:37:03.311114    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:37:03.311487    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:37:05.871461    9928 main.go:141] libmachine: [stdout =====>] : 172.23.181.87
	
	I0122 12:37:05.871461    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:37:05.872254    9928 out.go:177] * Found network options:
	I0122 12:37:05.873597    9928 out.go:177]   - NO_PROXY=172.23.191.211,172.23.187.210
	W0122 12:37:05.874429    9928 proxy.go:119] fail to check proxy env: Error ip not in block
	W0122 12:37:05.874429    9928 proxy.go:119] fail to check proxy env: Error ip not in block
	I0122 12:37:05.875426    9928 out.go:177]   - NO_PROXY=172.23.191.211,172.23.187.210
	W0122 12:37:05.876068    9928 proxy.go:119] fail to check proxy env: Error ip not in block
	W0122 12:37:05.876068    9928 proxy.go:119] fail to check proxy env: Error ip not in block
	W0122 12:37:05.877455    9928 proxy.go:119] fail to check proxy env: Error ip not in block
	W0122 12:37:05.877455    9928 proxy.go:119] fail to check proxy env: Error ip not in block
	I0122 12:37:05.880332    9928 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0122 12:37:05.880405    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:37:05.892453    9928 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0122 12:37:05.892453    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:37:08.165977    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:37:08.166063    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:37:08.166063    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:37:08.166063    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:37:08.166063    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:37:08.166063    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m03 ).networkadapters[0]).ipaddresses[0]
	I0122 12:37:10.887903    9928 main.go:141] libmachine: [stdout =====>] : 172.23.181.87
	
	I0122 12:37:10.887903    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:37:10.888295    9928 sshutil.go:53] new ssh client: &{IP:172.23.181.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200-m03\id_rsa Username:docker}
	I0122 12:37:10.911607    9928 main.go:141] libmachine: [stdout =====>] : 172.23.181.87
	
	I0122 12:37:10.911607    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:37:10.911607    9928 sshutil.go:53] new ssh client: &{IP:172.23.181.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200-m03\id_rsa Username:docker}
	I0122 12:37:11.091560    9928 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0122 12:37:11.091560    9928 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2112053s)
	I0122 12:37:11.091560    9928 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0122 12:37:11.091560    9928 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1990848s)
	W0122 12:37:11.091560    9928 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0122 12:37:11.110495    9928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0122 12:37:11.144094    9928 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0122 12:37:11.144094    9928 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0122 12:37:11.144094    9928 start.go:475] detecting cgroup driver to use...
	I0122 12:37:11.144094    9928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 12:37:11.179213    9928 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0122 12:37:11.192555    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0122 12:37:11.222338    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0122 12:37:11.239158    9928 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0122 12:37:11.253002    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0122 12:37:11.282304    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 12:37:11.315305    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0122 12:37:11.347421    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 12:37:11.380523    9928 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0122 12:37:11.412609    9928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0122 12:37:11.445990    9928 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0122 12:37:11.464475    9928 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0122 12:37:11.476987    9928 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0122 12:37:11.506874    9928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 12:37:11.677863    9928 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0122 12:37:11.704866    9928 start.go:475] detecting cgroup driver to use...
	I0122 12:37:11.717822    9928 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0122 12:37:11.737819    9928 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0122 12:37:11.737819    9928 command_runner.go:130] > [Unit]
	I0122 12:37:11.737819    9928 command_runner.go:130] > Description=Docker Application Container Engine
	I0122 12:37:11.737819    9928 command_runner.go:130] > Documentation=https://docs.docker.com
	I0122 12:37:11.737819    9928 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0122 12:37:11.737819    9928 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0122 12:37:11.737819    9928 command_runner.go:130] > StartLimitBurst=3
	I0122 12:37:11.737819    9928 command_runner.go:130] > StartLimitIntervalSec=60
	I0122 12:37:11.737819    9928 command_runner.go:130] > [Service]
	I0122 12:37:11.737819    9928 command_runner.go:130] > Type=notify
	I0122 12:37:11.737819    9928 command_runner.go:130] > Restart=on-failure
	I0122 12:37:11.737819    9928 command_runner.go:130] > Environment=NO_PROXY=172.23.191.211
	I0122 12:37:11.737819    9928 command_runner.go:130] > Environment=NO_PROXY=172.23.191.211,172.23.187.210
	I0122 12:37:11.737819    9928 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0122 12:37:11.737819    9928 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0122 12:37:11.737819    9928 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0122 12:37:11.737819    9928 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0122 12:37:11.737819    9928 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0122 12:37:11.737819    9928 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0122 12:37:11.737819    9928 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0122 12:37:11.737819    9928 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0122 12:37:11.737819    9928 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0122 12:37:11.737819    9928 command_runner.go:130] > ExecStart=
	I0122 12:37:11.737819    9928 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0122 12:37:11.737819    9928 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0122 12:37:11.737819    9928 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0122 12:37:11.737819    9928 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0122 12:37:11.737819    9928 command_runner.go:130] > LimitNOFILE=infinity
	I0122 12:37:11.737819    9928 command_runner.go:130] > LimitNPROC=infinity
	I0122 12:37:11.737819    9928 command_runner.go:130] > LimitCORE=infinity
	I0122 12:37:11.737819    9928 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0122 12:37:11.737819    9928 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0122 12:37:11.737819    9928 command_runner.go:130] > TasksMax=infinity
	I0122 12:37:11.737819    9928 command_runner.go:130] > TimeoutStartSec=0
	I0122 12:37:11.737819    9928 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0122 12:37:11.737819    9928 command_runner.go:130] > Delegate=yes
	I0122 12:37:11.737819    9928 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0122 12:37:11.737819    9928 command_runner.go:130] > KillMode=process
	I0122 12:37:11.737819    9928 command_runner.go:130] > [Install]
	I0122 12:37:11.737819    9928 command_runner.go:130] > WantedBy=multi-user.target
	I0122 12:37:11.750830    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 12:37:11.785606    9928 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0122 12:37:11.821592    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 12:37:11.854639    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0122 12:37:11.892073    9928 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0122 12:37:11.941908    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0122 12:37:11.960913    9928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 12:37:11.994912    9928 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0122 12:37:12.008898    9928 ssh_runner.go:195] Run: which cri-dockerd
	I0122 12:37:12.014562    9928 command_runner.go:130] > /usr/bin/cri-dockerd
	I0122 12:37:12.028245    9928 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0122 12:37:12.043250    9928 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0122 12:37:12.086847    9928 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0122 12:37:12.265839    9928 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0122 12:37:12.420238    9928 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0122 12:37:12.420305    9928 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0122 12:37:12.465353    9928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 12:37:12.629325    9928 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0122 12:37:14.198219    9928 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5688872s)
	I0122 12:37:14.209976    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0122 12:37:14.243229    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0122 12:37:14.273038    9928 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0122 12:37:14.445198    9928 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0122 12:37:14.618187    9928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 12:37:14.785421    9928 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0122 12:37:14.822324    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0122 12:37:14.859184    9928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 12:37:15.028776    9928 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0122 12:37:15.146791    9928 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0122 12:37:15.157786    9928 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0122 12:37:15.166777    9928 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0122 12:37:15.166777    9928 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0122 12:37:15.166777    9928 command_runner.go:130] > Device: 16h/22d	Inode: 931         Links: 1
	I0122 12:37:15.166777    9928 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0122 12:37:15.166777    9928 command_runner.go:130] > Access: 2024-01-22 12:37:15.053038490 +0000
	I0122 12:37:15.166777    9928 command_runner.go:130] > Modify: 2024-01-22 12:37:15.053038490 +0000
	I0122 12:37:15.166777    9928 command_runner.go:130] > Change: 2024-01-22 12:37:15.057038490 +0000
	I0122 12:37:15.166777    9928 command_runner.go:130] >  Birth: -
	I0122 12:37:15.166777    9928 start.go:543] Will wait 60s for crictl version
	I0122 12:37:15.177823    9928 ssh_runner.go:195] Run: which crictl
	I0122 12:37:15.183390    9928 command_runner.go:130] > /usr/bin/crictl
	I0122 12:37:15.196590    9928 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0122 12:37:15.270314    9928 command_runner.go:130] > Version:  0.1.0
	I0122 12:37:15.270314    9928 command_runner.go:130] > RuntimeName:  docker
	I0122 12:37:15.270314    9928 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0122 12:37:15.270314    9928 command_runner.go:130] > RuntimeApiVersion:  v1
	I0122 12:37:15.270314    9928 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0122 12:37:15.280247    9928 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0122 12:37:15.323834    9928 command_runner.go:130] > 24.0.7
	I0122 12:37:15.334156    9928 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0122 12:37:15.370011    9928 command_runner.go:130] > 24.0.7
	I0122 12:37:15.371330    9928 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0122 12:37:15.372081    9928 out.go:177]   - env NO_PROXY=172.23.191.211
	I0122 12:37:15.372984    9928 out.go:177]   - env NO_PROXY=172.23.191.211,172.23.187.210
	I0122 12:37:15.373622    9928 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0122 12:37:15.377770    9928 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0122 12:37:15.377770    9928 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0122 12:37:15.377770    9928 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0122 12:37:15.377770    9928 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:49:f0:b7 Flags:up|broadcast|multicast|running}
	I0122 12:37:15.380710    9928 ip.go:210] interface addr: fe80::2ecf:c548:6b1f:89a8/64
	I0122 12:37:15.380710    9928 ip.go:210] interface addr: 172.23.176.1/20
	I0122 12:37:15.392669    9928 ssh_runner.go:195] Run: grep 172.23.176.1	host.minikube.internal$ /etc/hosts
	I0122 12:37:15.397938    9928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.176.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 12:37:15.421671    9928 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-484200 for IP: 172.23.181.87
	I0122 12:37:15.421671    9928 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 12:37:15.421671    9928 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0122 12:37:15.422637    9928 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0122 12:37:15.422637    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0122 12:37:15.422637    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0122 12:37:15.422637    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0122 12:37:15.422637    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0122 12:37:15.423620    9928 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\6396.pem (1338 bytes)
	W0122 12:37:15.423620    9928 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\6396_empty.pem, impossibly tiny 0 bytes
	I0122 12:37:15.423620    9928 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0122 12:37:15.423620    9928 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0122 12:37:15.424618    9928 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0122 12:37:15.424618    9928 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0122 12:37:15.424618    9928 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem (1708 bytes)
	I0122 12:37:15.424618    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:37:15.425635    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\6396.pem -> /usr/share/ca-certificates/6396.pem
	I0122 12:37:15.425635    9928 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> /usr/share/ca-certificates/63962.pem
	I0122 12:37:15.426628    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0122 12:37:15.468687    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0122 12:37:15.511257    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0122 12:37:15.552811    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0122 12:37:15.595807    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0122 12:37:15.637343    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\6396.pem --> /usr/share/ca-certificates/6396.pem (1338 bytes)
	I0122 12:37:15.680156    9928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem --> /usr/share/ca-certificates/63962.pem (1708 bytes)
	I0122 12:37:15.734911    9928 ssh_runner.go:195] Run: openssl version
	I0122 12:37:15.742183    9928 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0122 12:37:15.760611    9928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0122 12:37:15.791578    9928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:37:15.797576    9928 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 22 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:37:15.797576    9928 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 22 10:39 /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:37:15.808576    9928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0122 12:37:15.815777    9928 command_runner.go:130] > b5213941
	I0122 12:37:15.829589    9928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0122 12:37:15.857519    9928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6396.pem && ln -fs /usr/share/ca-certificates/6396.pem /etc/ssl/certs/6396.pem"
	I0122 12:37:15.888493    9928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6396.pem
	I0122 12:37:15.893509    9928 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 22 10:53 /usr/share/ca-certificates/6396.pem
	I0122 12:37:15.894472    9928 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 22 10:53 /usr/share/ca-certificates/6396.pem
	I0122 12:37:15.909595    9928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6396.pem
	I0122 12:37:15.915584    9928 command_runner.go:130] > 51391683
	I0122 12:37:15.929840    9928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6396.pem /etc/ssl/certs/51391683.0"
	I0122 12:37:15.956756    9928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/63962.pem && ln -fs /usr/share/ca-certificates/63962.pem /etc/ssl/certs/63962.pem"
	I0122 12:37:15.988094    9928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/63962.pem
	I0122 12:37:15.994910    9928 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 22 10:53 /usr/share/ca-certificates/63962.pem
	I0122 12:37:15.994910    9928 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 22 10:53 /usr/share/ca-certificates/63962.pem
	I0122 12:37:16.010663    9928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/63962.pem
	I0122 12:37:16.017968    9928 command_runner.go:130] > 3ec20f2e
	I0122 12:37:16.030527    9928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/63962.pem /etc/ssl/certs/3ec20f2e.0"
	I0122 12:37:16.067913    9928 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0122 12:37:16.073536    9928 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0122 12:37:16.074419    9928 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0122 12:37:16.083967    9928 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0122 12:37:16.119629    9928 command_runner.go:130] > cgroupfs
	I0122 12:37:16.120270    9928 cni.go:84] Creating CNI manager for ""
	I0122 12:37:16.120270    9928 cni.go:136] 3 nodes found, recommending kindnet
	I0122 12:37:16.120270    9928 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0122 12:37:16.120270    9928 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.23.181.87 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-484200 NodeName:multinode-484200-m03 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.23.191.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.23.181.87 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0122 12:37:16.120270    9928 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.23.181.87
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-484200-m03"
	  kubeletExtraArgs:
	    node-ip: 172.23.181.87
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.23.191.211"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0122 12:37:16.120803    9928 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-484200-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.181.87
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-484200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0122 12:37:16.133785    9928 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0122 12:37:16.152856    9928 command_runner.go:130] > kubeadm
	I0122 12:37:16.152920    9928 command_runner.go:130] > kubectl
	I0122 12:37:16.152978    9928 command_runner.go:130] > kubelet
	I0122 12:37:16.153034    9928 binaries.go:44] Found k8s binaries, skipping transfer
	I0122 12:37:16.167389    9928 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0122 12:37:16.185773    9928 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0122 12:37:16.212664    9928 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0122 12:37:16.256467    9928 ssh_runner.go:195] Run: grep 172.23.191.211	control-plane.minikube.internal$ /etc/hosts
	I0122 12:37:16.263066    9928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.191.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 12:37:16.280518    9928 host.go:66] Checking if "multinode-484200" exists ...
	I0122 12:37:16.281389    9928 config.go:182] Loaded profile config "multinode-484200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 12:37:16.281677    9928 start.go:304] JoinCluster: &{Name:multinode-484200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-484200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.23.191.211 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.187.210 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.23.181.87 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0122 12:37:16.281677    9928 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0122 12:37:16.281677    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:37:18.412690    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:37:18.412837    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:37:18.412837    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:37:20.957973    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:37:20.957973    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:37:20.958530    9928 sshutil.go:53] new ssh client: &{IP:172.23.191.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200\id_rsa Username:docker}
	I0122 12:37:21.159076    9928 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 714t6y.s5r30pynl4isj9n7 --discovery-token-ca-cert-hash sha256:d9e856518777d1a58dcd52dda4caee52e72283bad61f5aa3ed0978c1365deb23 
	I0122 12:37:21.159291    9928 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.8775932s)
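The `--discovery-token-ca-cert-hash` in the printed join command is, per kubeadm's documented scheme, the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A standalone sketch (not minikube's code) computing that hash for a throwaway self-signed certificate:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"fmt"
	"math/big"
	"time"
)

// caCertHash returns the kubeadm-style discovery hash for a CA certificate:
// sha256 over the DER-encoded SubjectPublicKeyInfo, hex-encoded.
func caCertHash(cert *x509.Certificate) string {
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return "sha256:" + hex.EncodeToString(sum[:])
}

// demoCert builds a throwaway self-signed certificate so the sketch runs
// without touching a real cluster CA.
func demoCert() *x509.Certificate {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(time.Hour),
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	cert, err := x509.ParseCertificate(der)
	if err != nil {
		panic(err)
	}
	return cert
}

func main() {
	fmt.Println(caCertHash(demoCert()))
}
```

The same value can be derived by hand with `openssl x509 -pubkey` piped through `openssl dgst -sha256`, which is how the hash in the log line above would be verified out of band.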
	I0122 12:37:21.159410    9928 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:172.23.181.87 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0122 12:37:21.159510    9928 host.go:66] Checking if "multinode-484200" exists ...
	I0122 12:37:21.173139    9928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-484200-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0122 12:37:21.173139    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:37:23.308789    9928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:37:23.308876    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:37:23.308930    9928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:37:25.887213    9928 main.go:141] libmachine: [stdout =====>] : 172.23.191.211
	
	I0122 12:37:25.887213    9928 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:37:25.887425    9928 sshutil.go:53] new ssh client: &{IP:172.23.191.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200\id_rsa Username:docker}
	I0122 12:37:26.073065    9928 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0122 12:37:26.144063    9928 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-zs4rk, kube-system/kube-proxy-mxd9x
	I0122 12:37:26.146074    9928 command_runner.go:130] > node/multinode-484200-m03 cordoned
	I0122 12:37:26.146074    9928 command_runner.go:130] > node/multinode-484200-m03 drained
	I0122 12:37:26.146793    9928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-484200-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (4.9736323s)
	I0122 12:37:26.146793    9928 node.go:108] successfully drained node "m03"
	I0122 12:37:26.147788    9928 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 12:37:26.148598    9928 kapi.go:59] client config for multinode-484200: &rest.Config{Host:"https://172.23.191.211:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x184a600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0122 12:37:26.149614    9928 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0122 12:37:26.149736    9928 round_trippers.go:463] DELETE https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:26.149736    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:26.149795    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:26.149795    9928 round_trippers.go:473]     Content-Type: application/json
	I0122 12:37:26.149795    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:26.166608    9928 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0122 12:37:26.166608    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:26.167047    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:26.167047    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:26.167047    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:26.167118    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:26.167159    9928 round_trippers.go:580]     Content-Length: 171
	I0122 12:37:26.167188    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:26 GMT
	I0122 12:37:26.167188    9928 round_trippers.go:580]     Audit-Id: 69d0ac7c-b493-49d9-a02b-2b046b2aed73
	I0122 12:37:26.167188    9928 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-484200-m03","kind":"nodes","uid":"1d1fc222-9802-4017-8aee-b3310ac58592"}}
	I0122 12:37:26.167188    9928 node.go:124] successfully deleted node "m03"
	I0122 12:37:26.167188    9928 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:172.23.181.87 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0122 12:37:26.167188    9928 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:172.23.181.87 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0122 12:37:26.167188    9928 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 714t6y.s5r30pynl4isj9n7 --discovery-token-ca-cert-hash sha256:d9e856518777d1a58dcd52dda4caee52e72283bad61f5aa3ed0978c1365deb23 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-484200-m03"
	I0122 12:37:26.525562    9928 command_runner.go:130] ! W0122 12:37:26.526895    1350 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0122 12:37:27.218084    9928 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0122 12:37:29.034781    9928 command_runner.go:130] > [preflight] Running pre-flight checks
	I0122 12:37:29.035376    9928 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0122 12:37:29.035376    9928 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0122 12:37:29.035376    9928 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0122 12:37:29.035376    9928 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0122 12:37:29.035376    9928 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0122 12:37:29.035376    9928 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0122 12:37:29.035376    9928 command_runner.go:130] > This node has joined the cluster:
	I0122 12:37:29.035376    9928 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0122 12:37:29.035376    9928 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0122 12:37:29.035376    9928 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0122 12:37:29.035376    9928 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 714t6y.s5r30pynl4isj9n7 --discovery-token-ca-cert-hash sha256:d9e856518777d1a58dcd52dda4caee52e72283bad61f5aa3ed0978c1365deb23 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-484200-m03": (2.8681756s)
	I0122 12:37:29.035376    9928 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0122 12:37:29.458711    9928 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0122 12:37:29.472501    9928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=02088786cd935ebc1444d13e603368558b96d0e3 minikube.k8s.io/name=multinode-484200 minikube.k8s.io/updated_at=2024_01_22T12_37_29_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 12:37:29.609648    9928 command_runner.go:130] > node/multinode-484200-m02 labeled
	I0122 12:37:29.609837    9928 command_runner.go:130] > node/multinode-484200-m03 labeled
	I0122 12:37:29.609947    9928 start.go:306] JoinCluster complete in 13.3282118s
	I0122 12:37:29.609947    9928 cni.go:84] Creating CNI manager for ""
	I0122 12:37:29.609947    9928 cni.go:136] 3 nodes found, recommending kindnet
	I0122 12:37:29.623111    9928 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0122 12:37:29.632099    9928 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0122 12:37:29.632099    9928 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0122 12:37:29.632099    9928 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0122 12:37:29.632099    9928 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0122 12:37:29.632099    9928 command_runner.go:130] > Access: 2024-01-22 12:31:26.825009300 +0000
	I0122 12:37:29.632099    9928 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0122 12:37:29.632099    9928 command_runner.go:130] > Change: 2024-01-22 12:31:17.097000000 +0000
	I0122 12:37:29.632099    9928 command_runner.go:130] >  Birth: -
	I0122 12:37:29.632099    9928 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0122 12:37:29.632099    9928 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0122 12:37:29.680877    9928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0122 12:37:30.067201    9928 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0122 12:37:30.067201    9928 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0122 12:37:30.067201    9928 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0122 12:37:30.067201    9928 command_runner.go:130] > daemonset.apps/kindnet configured
	I0122 12:37:30.068402    9928 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 12:37:30.069148    9928 kapi.go:59] client config for multinode-484200: &rest.Config{Host:"https://172.23.191.211:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x184a600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0122 12:37:30.069919    9928 round_trippers.go:463] GET https://172.23.191.211:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0122 12:37:30.069919    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:30.069919    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:30.069919    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:30.076883    9928 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0122 12:37:30.076883    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:30.076883    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:30.076883    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:30.076883    9928 round_trippers.go:580]     Content-Length: 292
	I0122 12:37:30.076883    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:30 GMT
	I0122 12:37:30.076883    9928 round_trippers.go:580]     Audit-Id: 288f2d34-7add-4a78-8187-5db203f0e6c3
	I0122 12:37:30.076883    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:30.076883    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:30.078250    9928 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"98df7e30-2026-4507-ad8d-589d82e9b1ba","resourceVersion":"1788","creationTimestamp":"2024-01-22T12:10:46Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0122 12:37:30.078250    9928 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-484200" context rescaled to 1 replicas
	I0122 12:37:30.078250    9928 start.go:223] Will wait 6m0s for node &{Name:m03 IP:172.23.181.87 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0122 12:37:30.078921    9928 out.go:177] * Verifying Kubernetes components...
	I0122 12:37:30.094720    9928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 12:37:30.115078    9928 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 12:37:30.115979    9928 kapi.go:59] client config for multinode-484200: &rest.Config{Host:"https://172.23.191.211:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-484200\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x184a600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0122 12:37:30.117386    9928 node_ready.go:35] waiting up to 6m0s for node "multinode-484200-m03" to be "Ready" ...
	I0122 12:37:30.117386    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:30.117386    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:30.117386    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:30.117386    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:30.124734    9928 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0122 12:37:30.124766    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:30.124766    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:30.124766    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:30.124766    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:30 GMT
	I0122 12:37:30.124766    9928 round_trippers.go:580]     Audit-Id: 5fcc38ce-e373-4085-a975-5b77a7c30eca
	I0122 12:37:30.124766    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:30.124766    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:30.124766    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2120","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3448 chars]
	I0122 12:37:30.624292    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:30.624292    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:30.624292    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:30.624292    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:30.628884    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:37:30.628884    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:30.628884    9928 round_trippers.go:580]     Audit-Id: f151416e-8a39-4a0f-9f57-de8b113beabd
	I0122 12:37:30.628884    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:30.629083    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:30.629083    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:30.629134    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:30.629163    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:30 GMT
	I0122 12:37:30.629512    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2120","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3448 chars]
	I0122 12:37:31.127136    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:31.127136    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:31.127136    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:31.127136    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:31.131504    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:37:31.131577    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:31.131577    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:31.131577    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:31.131577    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:31 GMT
	I0122 12:37:31.131577    9928 round_trippers.go:580]     Audit-Id: dc134a76-41b6-4778-a252-88d95fb4a03d
	I0122 12:37:31.131577    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:31.131577    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:31.131577    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2120","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3448 chars]
	I0122 12:37:31.630947    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:31.630947    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:31.630947    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:31.630947    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:31.635799    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:37:31.635874    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:31.635874    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:31.635874    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:31.635973    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:31.635973    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:31 GMT
	I0122 12:37:31.635973    9928 round_trippers.go:580]     Audit-Id: 91618188-936b-4721-a34f-e80178259475
	I0122 12:37:31.635973    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:31.636318    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2120","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3448 chars]
	I0122 12:37:32.131828    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:32.131828    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:32.131828    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:32.131828    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:32.135453    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:37:32.135923    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:32.135923    9928 round_trippers.go:580]     Audit-Id: ec3966c9-ba55-4cd1-84c8-7b142e6c7ec5
	I0122 12:37:32.135923    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:32.135923    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:32.135923    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:32.135999    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:32.135999    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:32 GMT
	I0122 12:37:32.136028    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2124","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3557 chars]
	I0122 12:37:32.136603    9928 node_ready.go:58] node "multinode-484200-m03" has status "Ready":"False"
	I0122 12:37:32.619040    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:32.619234    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:32.619234    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:32.619341    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:32.622892    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:37:32.622892    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:32.623923    9928 round_trippers.go:580]     Audit-Id: b8f9bdd9-a462-45fd-82a6-273436a9992d
	I0122 12:37:32.623923    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:32.623923    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:32.624008    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:32.624008    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:32.624008    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:32 GMT
	I0122 12:37:32.624283    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2124","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3557 chars]
	I0122 12:37:33.123418    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:33.123503    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:33.123503    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:33.123503    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:33.127126    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:37:33.127126    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:33.127126    9928 round_trippers.go:580]     Audit-Id: 8f330dfd-ad2d-42d5-9420-694ce4a048f4
	I0122 12:37:33.127126    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:33.127126    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:33.127126    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:33.127498    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:33.127498    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:33 GMT
	I0122 12:37:33.127646    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2124","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3557 chars]
	I0122 12:37:33.626990    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:33.627075    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:33.627075    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:33.627197    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:33.630595    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:37:33.630595    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:33.630595    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:33.630595    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:33.630595    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:33.630595    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:33 GMT
	I0122 12:37:33.630595    9928 round_trippers.go:580]     Audit-Id: ecc26cb4-2cba-49db-997e-6c8bd707d1a8
	I0122 12:37:33.630595    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:33.630595    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2124","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3557 chars]
	I0122 12:37:34.120803    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:34.120910    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:34.120910    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:34.120983    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:34.124786    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:37:34.125846    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:34.125846    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:34.125846    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:34.125846    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:34.125846    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:34.125846    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:34 GMT
	I0122 12:37:34.125846    9928 round_trippers.go:580]     Audit-Id: 0cf7d42c-9e8a-4d6e-a49d-f4455114bd4e
	I0122 12:37:34.126070    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2124","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3557 chars]
	I0122 12:37:34.624178    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:34.624299    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:34.624299    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:34.624366    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:34.628778    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:37:34.628836    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:34.628836    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:34.628836    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:34.628836    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:34.628836    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:34 GMT
	I0122 12:37:34.628919    9928 round_trippers.go:580]     Audit-Id: ade9644b-1bcf-41b7-8f72-29026b0bdf74
	I0122 12:37:34.628919    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:34.629141    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2124","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3557 chars]
	I0122 12:37:34.629141    9928 node_ready.go:58] node "multinode-484200-m03" has status "Ready":"False"
	I0122 12:37:35.125135    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:35.125135    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:35.125232    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:35.125232    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:35.129053    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:37:35.129053    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:35.129053    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:35.129053    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:35.129053    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:35.129053    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:35.129053    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:35 GMT
	I0122 12:37:35.129053    9928 round_trippers.go:580]     Audit-Id: fa14c3b5-b7c9-4acb-bbd5-1485d019d2a1
	I0122 12:37:35.130176    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2124","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3557 chars]
	I0122 12:37:35.628576    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:35.628813    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:35.628813    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:35.628813    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:35.632074    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:37:35.632074    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:35.632074    9928 round_trippers.go:580]     Audit-Id: 2f0f4919-e46f-4889-82ac-f06cafe3d8f5
	I0122 12:37:35.632074    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:35.633166    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:35.633166    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:35.633166    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:35.633166    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:35 GMT
	I0122 12:37:35.633530    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2124","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3557 chars]
	I0122 12:37:36.128256    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:36.128345    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:36.128345    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:36.128345    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:36.134749    9928 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0122 12:37:36.134749    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:36.134749    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:36.134749    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:36.134749    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:36.135320    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:36.135351    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:36 GMT
	I0122 12:37:36.135425    9928 round_trippers.go:580]     Audit-Id: 3022fa39-c6d8-40df-bdde-be00d8f96151
	I0122 12:37:36.136007    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2124","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3557 chars]
	I0122 12:37:36.618136    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:36.618294    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:36.618294    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:36.618294    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:36.623533    9928 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0122 12:37:36.623829    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:36.623829    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:36.623829    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:36.623829    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:36 GMT
	I0122 12:37:36.623829    9928 round_trippers.go:580]     Audit-Id: 08009933-fd22-4427-b84d-c6feec4ee5dc
	I0122 12:37:36.623829    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:36.623829    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:36.624113    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2124","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3557 chars]
	I0122 12:37:37.118318    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:37.118387    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:37.118387    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:37.118387    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:37.123037    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:37:37.123037    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:37.123037    9928 round_trippers.go:580]     Audit-Id: 2459e37a-dc99-42f5-a625-3836fd5c9b09
	I0122 12:37:37.123037    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:37.123724    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:37.123724    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:37.123724    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:37.123724    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:37 GMT
	I0122 12:37:37.123863    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2124","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3557 chars]
	I0122 12:37:37.124277    9928 node_ready.go:58] node "multinode-484200-m03" has status "Ready":"False"
	I0122 12:37:37.619323    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:37.619408    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:37.619408    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:37.619488    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:37.623273    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:37:37.624087    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:37.624087    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:37.624087    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:37.624087    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:37 GMT
	I0122 12:37:37.624087    9928 round_trippers.go:580]     Audit-Id: 9de9f837-6eaa-4986-b8b9-b72ed092fe22
	I0122 12:37:37.624087    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:37.624087    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:37.624425    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2124","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3557 chars]
	I0122 12:37:38.126302    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:38.126302    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:38.126302    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:38.126302    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:38.131239    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:37:38.131307    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:38.131307    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:38.131307    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:38.131307    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:38 GMT
	I0122 12:37:38.131307    9928 round_trippers.go:580]     Audit-Id: 864dc71b-7c7b-45d8-82cc-3a9b1ca9ee4e
	I0122 12:37:38.131307    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:38.131307    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:38.132128    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2135","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0122 12:37:38.629131    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:38.629238    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:38.629238    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:38.629238    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:38.633202    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:37:38.633202    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:38.633202    9928 round_trippers.go:580]     Audit-Id: 0839a7a1-4c67-4179-8a8f-02698e59aef3
	I0122 12:37:38.633362    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:38.633362    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:38.633420    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:38.633420    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:38.633478    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:38 GMT
	I0122 12:37:38.633607    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2135","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0122 12:37:39.132242    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:39.132242    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:39.132337    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:39.132337    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:39.138661    9928 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0122 12:37:39.138661    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:39.139196    9928 round_trippers.go:580]     Audit-Id: 2076114a-0d2e-492f-9547-ef7549ba0454
	I0122 12:37:39.139196    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:39.139196    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:39.139196    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:39.139196    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:39.139265    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:39 GMT
	I0122 12:37:39.139593    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2135","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0122 12:37:39.140047    9928 node_ready.go:58] node "multinode-484200-m03" has status "Ready":"False"
	I0122 12:37:39.632359    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:39.632502    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:39.632502    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:39.632502    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:39.636723    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:37:39.636723    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:39.636894    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:39.636894    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:39.636894    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:39 GMT
	I0122 12:37:39.636894    9928 round_trippers.go:580]     Audit-Id: 2b90138f-3de5-4412-a5fc-cf3496709651
	I0122 12:37:39.636894    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:39.637125    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:39.637224    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2135","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0122 12:37:40.121889    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:40.121889    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:40.121889    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:40.121889    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:40.125534    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:37:40.126419    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:40.126419    9928 round_trippers.go:580]     Audit-Id: d8434f29-a3d9-4a59-ba5b-8ad14a69b0b2
	I0122 12:37:40.126419    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:40.126486    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:40.126515    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:40.126515    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:40.126515    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:40 GMT
	I0122 12:37:40.126680    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2135","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0122 12:37:40.625175    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:40.625270    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:40.625270    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:40.625270    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:40.629592    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:37:40.629679    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:40.629679    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:40.629679    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:40.629679    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:40.629679    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:40 GMT
	I0122 12:37:40.629679    9928 round_trippers.go:580]     Audit-Id: a6a354b0-7516-46ae-909c-2e8cfad1c3e0
	I0122 12:37:40.629679    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:40.629938    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2135","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0122 12:37:41.127025    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:41.127117    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:41.127117    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:41.127117    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:41.133535    9928 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0122 12:37:41.133571    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:41.133571    9928 round_trippers.go:580]     Audit-Id: c607a285-415e-46b3-988e-b4bdaee7c771
	I0122 12:37:41.133571    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:41.133571    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:41.133671    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:41.133671    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:41.133671    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:41 GMT
	I0122 12:37:41.134278    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2135","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0122 12:37:41.625300    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:41.625300    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:41.625300    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:41.625300    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:41.628901    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:37:41.629900    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:41.630037    9928 round_trippers.go:580]     Audit-Id: 36eddb3e-ff21-40d0-9072-0e366e536f6c
	I0122 12:37:41.630058    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:41.630058    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:41.630058    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:41.630058    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:41.630058    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:41 GMT
	I0122 12:37:41.630909    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2135","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0122 12:37:41.631510    9928 node_ready.go:58] node "multinode-484200-m03" has status "Ready":"False"
	I0122 12:37:42.125702    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:42.125702    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:42.125879    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:42.125879    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:42.130107    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:37:42.130196    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:42.130196    9928 round_trippers.go:580]     Audit-Id: 99e86887-a766-4c42-88f2-0305362f9a8b
	I0122 12:37:42.130196    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:42.130196    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:42.130196    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:42.130196    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:42.130307    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:42 GMT
	I0122 12:37:42.130307    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2135","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0122 12:37:42.628133    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:42.628133    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:42.628133    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:42.628133    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:42.628133    9928 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0122 12:37:42.628133    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:42.628133    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:42.628133    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:42 GMT
	I0122 12:37:42.628133    9928 round_trippers.go:580]     Audit-Id: c450f50d-ca7d-44ed-8926-fff98023e7c4
	I0122 12:37:42.628133    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:42.628133    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:42.628133    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:42.628133    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2135","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0122 12:37:43.128191    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:43.128191    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:43.128341    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:43.128341    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:43.132581    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:37:43.132606    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:43.132606    9928 round_trippers.go:580]     Audit-Id: a6063ace-e36e-42b9-86c5-7bbd339a98a2
	I0122 12:37:43.132606    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:43.132606    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:43.132606    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:43.132606    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:43.132606    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:43 GMT
	I0122 12:37:43.133029    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2135","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0122 12:37:43.631013    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:43.631106    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:43.631106    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:43.631187    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:43.634427    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:37:43.634427    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:43.634427    9928 round_trippers.go:580]     Audit-Id: 46f00bc3-8288-489f-bebd-fda086581185
	I0122 12:37:43.634427    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:43.634427    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:43.634427    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:43.634427    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:43.634979    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:43 GMT
	I0122 12:37:43.635222    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2135","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0122 12:37:43.635695    9928 node_ready.go:58] node "multinode-484200-m03" has status "Ready":"False"
	I0122 12:37:44.131951    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:44.132022    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:44.132022    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:44.132098    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:44.139901    9928 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0122 12:37:44.139972    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:44.139972    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:44.139972    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:44 GMT
	I0122 12:37:44.140037    9928 round_trippers.go:580]     Audit-Id: 45acc71c-ddfa-4fb3-8200-9b3bbff9d04c
	I0122 12:37:44.140037    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:44.140084    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:44.140112    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:44.140473    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2135","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0122 12:37:44.632039    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:44.632039    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:44.632039    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:44.632293    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:44.637149    9928 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0122 12:37:44.637511    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:44.637511    9928 round_trippers.go:580]     Audit-Id: b0e13496-e994-4fab-a98b-f228da201735
	I0122 12:37:44.637661    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:44.637686    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:44.637686    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:44.637686    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:44.637686    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:44 GMT
	I0122 12:37:44.637686    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2135","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0122 12:37:45.132817    9928 round_trippers.go:463] GET https://172.23.191.211:8443/api/v1/nodes/multinode-484200-m03
	I0122 12:37:45.132817    9928 round_trippers.go:469] Request Headers:
	I0122 12:37:45.132817    9928 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0122 12:37:45.132817    9928 round_trippers.go:473]     Accept: application/json, */*
	I0122 12:37:45.136234    9928 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0122 12:37:45.136234    9928 round_trippers.go:577] Response Headers:
	I0122 12:37:45.136797    9928 round_trippers.go:580]     Content-Type: application/json
	I0122 12:37:45.136797    9928 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3bb51a15-6432-411c-96e9-22e3d36227b9
	I0122 12:37:45.136797    9928 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ec22182b-25f9-4c3d-9f22-94fbbb1cff3b
	I0122 12:37:45.136797    9928 round_trippers.go:580]     Date: Mon, 22 Jan 2024 12:37:45 GMT
	I0122 12:37:45.136797    9928 round_trippers.go:580]     Audit-Id: 1da10034-ed00-41cd-a9c4-7b3a6c3e0e17
	I0122 12:37:45.136797    9928 round_trippers.go:580]     Cache-Control: no-cache, private
	I0122 12:37:45.137016    9928 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-484200-m03","uid":"7eb75e93-1147-4ebd-b614-e25d06f2e612","resourceVersion":"2135","creationTimestamp":"2024-01-22T12:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-484200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02088786cd935ebc1444d13e603368558b96d0e3","minikube.k8s.io/name":"multinode-484200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_22T12_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-22T12:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-01-22 12:31:19 UTC, ends at Mon 2024-01-22 12:38:11 UTC. --
	Jan 22 12:33:10 multinode-484200 dockerd[1045]: time="2024-01-22T12:33:10.164247275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 12:33:10 multinode-484200 dockerd[1045]: time="2024-01-22T12:33:10.164281176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 12:33:10 multinode-484200 dockerd[1045]: time="2024-01-22T12:33:10.164869681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 12:33:10 multinode-484200 dockerd[1045]: time="2024-01-22T12:33:10.167575808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 12:33:10 multinode-484200 dockerd[1045]: time="2024-01-22T12:33:10.167741709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 12:33:10 multinode-484200 dockerd[1045]: time="2024-01-22T12:33:10.167775210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 12:33:10 multinode-484200 dockerd[1045]: time="2024-01-22T12:33:10.167786710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 12:33:10 multinode-484200 cri-dockerd[1250]: time="2024-01-22T12:33:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3776e3fb92e1dce81a67cac401197f4e9dcd833159d06e937b2c588f78bd427b/resolv.conf as [nameserver 172.23.176.1]"
	Jan 22 12:33:10 multinode-484200 dockerd[1045]: time="2024-01-22T12:33:10.911700859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 12:33:10 multinode-484200 dockerd[1045]: time="2024-01-22T12:33:10.912338266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 12:33:10 multinode-484200 dockerd[1045]: time="2024-01-22T12:33:10.912472867Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 12:33:10 multinode-484200 dockerd[1045]: time="2024-01-22T12:33:10.912509767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 12:33:11 multinode-484200 cri-dockerd[1250]: time="2024-01-22T12:33:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e0056d40bd2ae3f28c9af1ed31f6e5fb64da05bec2c67a1e3c3823c3ac20127d/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jan 22 12:33:11 multinode-484200 dockerd[1045]: time="2024-01-22T12:33:11.190032257Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 12:33:11 multinode-484200 dockerd[1045]: time="2024-01-22T12:33:11.190720263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 12:33:11 multinode-484200 dockerd[1045]: time="2024-01-22T12:33:11.190777963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 12:33:11 multinode-484200 dockerd[1045]: time="2024-01-22T12:33:11.190794964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 12:33:26 multinode-484200 dockerd[1039]: time="2024-01-22T12:33:26.381850710Z" level=info msg="ignoring event" container=ad2a662b28c0fb6fc95898bfecacab7a7643f7bb6ad31d63ae978fffa2f8e4af module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 22 12:33:26 multinode-484200 dockerd[1045]: time="2024-01-22T12:33:26.382702513Z" level=info msg="shim disconnected" id=ad2a662b28c0fb6fc95898bfecacab7a7643f7bb6ad31d63ae978fffa2f8e4af namespace=moby
	Jan 22 12:33:26 multinode-484200 dockerd[1045]: time="2024-01-22T12:33:26.383119815Z" level=warning msg="cleaning up after shim disconnected" id=ad2a662b28c0fb6fc95898bfecacab7a7643f7bb6ad31d63ae978fffa2f8e4af namespace=moby
	Jan 22 12:33:26 multinode-484200 dockerd[1045]: time="2024-01-22T12:33:26.383168515Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 22 12:33:39 multinode-484200 dockerd[1045]: time="2024-01-22T12:33:39.179328884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 12:33:39 multinode-484200 dockerd[1045]: time="2024-01-22T12:33:39.179458784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 12:33:39 multinode-484200 dockerd[1045]: time="2024-01-22T12:33:39.180463086Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 12:33:39 multinode-484200 dockerd[1045]: time="2024-01-22T12:33:39.180500486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ad7934cdde342       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       2                   7396534a2b0e3       storage-provisioner
	7e7d64fe085b3       8c811b4aec35f                                                                                         5 minutes ago       Running             busybox                   1                   e0056d40bd2ae       busybox-5b5d89c9d6-flgvf
	c65cf28e43f16       ead0a4a53df89                                                                                         5 minutes ago       Running             coredns                   1                   3776e3fb92e1d       coredns-5dd5756b68-g9wk9
	a6f3c78e17911       c7d1297425461                                                                                         5 minutes ago       Running             kindnet-cni               1                   a48f46d3b108f       kindnet-f9bw4
	ad2a662b28c0f       6e38f40d628db                                                                                         5 minutes ago       Exited              storage-provisioner       1                   7396534a2b0e3       storage-provisioner
	4b912976df5cc       83f6cc407eed8                                                                                         5 minutes ago       Running             kube-proxy                1                   7da42eed3db22       kube-proxy-m94rg
	7229244e2aeb1       73deb9a3f7025                                                                                         5 minutes ago       Running             etcd                      0                   96bb951dfcee8       etcd-multinode-484200
	aed3428f2a6ee       e3db313c6dbc0                                                                                         5 minutes ago       Running             kube-scheduler            1                   6105fb5af43d1       kube-scheduler-multinode-484200
	9ccd405c48742       d058aa5ab969c                                                                                         5 minutes ago       Running             kube-controller-manager   1                   9dcda7765e032       kube-controller-manager-multinode-484200
	83e9346ed4072       7fe0e6f37db33                                                                                         5 minutes ago       Running             kube-apiserver            0                   b2dcdfcec0062       kube-apiserver-multinode-484200
	c414da76cc7d1       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago      Exited              busybox                   0                   adbe8caf1decc       busybox-5b5d89c9d6-flgvf
	f56e8f9de45ec       ead0a4a53df89                                                                                         26 minutes ago      Exited              coredns                   0                   e46fc8eaac508       coredns-5dd5756b68-g9wk9
	e55da8df34e58       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052              27 minutes ago      Exited              kindnet-cni               0                   e27d0a036e725       kindnet-f9bw4
	3f4ebe9f43eab       83f6cc407eed8                                                                                         27 minutes ago      Exited              kube-proxy                0                   c1a37f7be8afd       kube-proxy-m94rg
	eae98e092eb29       e3db313c6dbc0                                                                                         27 minutes ago      Exited              kube-scheduler            0                   861530d246aca       kube-scheduler-multinode-484200
	dca0a89d970d6       d058aa5ab969c                                                                                         27 minutes ago      Exited              kube-controller-manager   0                   24388c3440dd3       kube-controller-manager-multinode-484200
	
	
	==> coredns [c65cf28e43f1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a9e6d52ea38bb1d541e41f968d30641b2b346f270fd5cb95216254c1f2330bb2b9715ab07840a759252c526786b95ee2dc393b58c195fd2a54b587422985693b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55110 - 36631 "HINFO IN 6552672612339697103.3167965997653694166. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.02952107s
	
	
	==> coredns [f56e8f9de45e] <==
	[INFO] 10.244.1.2:41421 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000811s
	[INFO] 10.244.1.2:34776 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000184201s
	[INFO] 10.244.1.2:51063 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000779s
	[INFO] 10.244.1.2:36577 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001729s
	[INFO] 10.244.1.2:46879 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000052901s
	[INFO] 10.244.1.2:51529 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000665s
	[INFO] 10.244.1.2:51244 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000145301s
	[INFO] 10.244.0.3:38031 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190001s
	[INFO] 10.244.0.3:60417 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000171501s
	[INFO] 10.244.0.3:59648 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001408s
	[INFO] 10.244.0.3:33828 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000146001s
	[INFO] 10.244.1.2:33852 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124901s
	[INFO] 10.244.1.2:59520 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074101s
	[INFO] 10.244.1.2:60914 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000894s
	[INFO] 10.244.1.2:40296 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000049901s
	[INFO] 10.244.0.3:35445 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001051s
	[INFO] 10.244.0.3:47826 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000766502s
	[INFO] 10.244.0.3:33032 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001394s
	[INFO] 10.244.0.3:37524 - 5 "PTR IN 1.176.23.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000107s
	[INFO] 10.244.1.2:47681 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001184s
	[INFO] 10.244.1.2:53042 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000105s
	[INFO] 10.244.1.2:34838 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000144601s
	[INFO] 10.244.1.2:55904 - 5 "PTR IN 1.176.23.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000817s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-484200
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-484200
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=02088786cd935ebc1444d13e603368558b96d0e3
	                    minikube.k8s.io/name=multinode-484200
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_22T12_10_48_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jan 2024 12:10:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-484200
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jan 2024 12:38:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jan 2024 12:38:09 +0000   Mon, 22 Jan 2024 12:10:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jan 2024 12:38:09 +0000   Mon, 22 Jan 2024 12:10:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jan 2024 12:38:09 +0000   Mon, 22 Jan 2024 12:10:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jan 2024 12:38:09 +0000   Mon, 22 Jan 2024 12:33:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.23.191.211
	  Hostname:    multinode-484200
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 dec99e22bd964a7ba02a38f27bdb549e
	  System UUID:                e9e83410-9a00-1d40-8577-0dce0e816b57
	  Boot ID:                    78b805df-55ea-4a9a-a80c-6a61b3cdac15
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-flgvf                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-5dd5756b68-g9wk9                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-multinode-484200                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m17s
	  kube-system                 kindnet-f9bw4                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-multinode-484200             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-controller-manager-multinode-484200    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-m94rg                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-multinode-484200             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 27m                    kube-proxy       
	  Normal  Starting                 5m15s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  27m (x8 over 27m)      kubelet          Node multinode-484200 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m (x8 over 27m)      kubelet          Node multinode-484200 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m (x7 over 27m)      kubelet          Node multinode-484200 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 27m                    kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    27m                    kubelet          Node multinode-484200 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m                    kubelet          Node multinode-484200 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  27m                    kubelet          Node multinode-484200 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           27m                    node-controller  Node multinode-484200 event: Registered Node multinode-484200 in Controller
	  Normal  NodeReady                26m                    kubelet          Node multinode-484200 status is now: NodeReady
	  Normal  Starting                 5m24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m24s (x8 over 5m24s)  kubelet          Node multinode-484200 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m24s (x8 over 5m24s)  kubelet          Node multinode-484200 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m24s (x7 over 5m24s)  kubelet          Node multinode-484200 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m5s                   node-controller  Node multinode-484200 event: Registered Node multinode-484200 in Controller
	
	
	Name:               multinode-484200-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-484200-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=02088786cd935ebc1444d13e603368558b96d0e3
	                    minikube.k8s.io/name=multinode-484200
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_22T12_37_29_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jan 2024 12:35:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-484200-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jan 2024 12:38:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jan 2024 12:35:26 +0000   Mon, 22 Jan 2024 12:35:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jan 2024 12:35:26 +0000   Mon, 22 Jan 2024 12:35:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jan 2024 12:35:26 +0000   Mon, 22 Jan 2024 12:35:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jan 2024 12:35:26 +0000   Mon, 22 Jan 2024 12:35:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.23.187.210
	  Hostname:    multinode-484200-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 d9b92a5c92874cc0a65a6e368b65222d
	  System UUID:                5daa307e-d17a-a94a-b576-0caf5f5dd761
	  Boot ID:                    304ad9b7-8502-4f1c-ab5a-1f5005b8ee2f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-7gnl2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	  kube-system                 kindnet-xqpt7               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-proxy-nsjc6            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 24m                    kube-proxy  
	  Normal  Starting                 2m53s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  24m (x5 over 24m)      kubelet     Node multinode-484200-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x5 over 24m)      kubelet     Node multinode-484200-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x5 over 24m)      kubelet     Node multinode-484200-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                24m                    kubelet     Node multinode-484200-m02 status is now: NodeReady
	  Normal  Starting                 2m55s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m55s (x2 over 2m55s)  kubelet     Node multinode-484200-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m55s (x2 over 2m55s)  kubelet     Node multinode-484200-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m55s (x2 over 2m55s)  kubelet     Node multinode-484200-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m55s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m45s                  kubelet     Node multinode-484200-m02 status is now: NodeReady
	
	
	Name:               multinode-484200-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-484200-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=02088786cd935ebc1444d13e603368558b96d0e3
	                    minikube.k8s.io/name=multinode-484200
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_22T12_37_29_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jan 2024 12:37:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-484200-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jan 2024 12:38:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jan 2024 12:38:05 +0000   Mon, 22 Jan 2024 12:37:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jan 2024 12:38:05 +0000   Mon, 22 Jan 2024 12:37:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jan 2024 12:38:05 +0000   Mon, 22 Jan 2024 12:37:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jan 2024 12:38:05 +0000   Mon, 22 Jan 2024 12:38:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.23.181.87
	  Hostname:    multinode-484200-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 026425abd3c34d39a2a3944b867d028d
	  System UUID:                32a64ce8-607d-3741-b8de-d54484049663
	  Boot ID:                    54277625-def6-431b-9747-350589cab1a1
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-zs4rk       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-proxy-mxd9x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m32s                  kube-proxy       
	  Normal  Starting                 19m                    kube-proxy       
	  Normal  Starting                 31s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  19m (x5 over 19m)      kubelet          Node multinode-484200-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x5 over 19m)      kubelet          Node multinode-484200-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x5 over 19m)      kubelet          Node multinode-484200-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                19m                    kubelet          Node multinode-484200-m03 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  9m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    9m35s (x2 over 9m35s)  kubelet          Node multinode-484200-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m35s (x2 over 9m35s)  kubelet          Node multinode-484200-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  9m35s (x2 over 9m35s)  kubelet          Node multinode-484200-m03 status is now: NodeHasSufficientMemory
	  Normal  Starting                 9m35s                  kubelet          Starting kubelet.
	  Normal  NodeReady                9m23s                  kubelet          Node multinode-484200-m03 status is now: NodeReady
	  Normal  Starting                 45s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  45s (x2 over 45s)      kubelet          Node multinode-484200-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    45s (x2 over 45s)      kubelet          Node multinode-484200-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     45s (x2 over 45s)      kubelet          Node multinode-484200-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  45s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           41s                    node-controller  Node multinode-484200-m03 event: Registered Node multinode-484200-m03 in Controller
	  Normal  NodeReady                7s                     kubelet          Node multinode-484200-m03 status is now: NodeReady
	
	
	==> dmesg <==
	                "trace_clock=local"
	              on the kernel command line
	[  +0.000068] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.909576] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.413596] systemd-fstab-generator[113]: Ignoring "noauto" for root device
	[  +1.355612] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +8.047590] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jan22 12:32] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.150172] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[ +25.674501] systemd-fstab-generator[965]: Ignoring "noauto" for root device
	[  +0.577597] systemd-fstab-generator[1006]: Ignoring "noauto" for root device
	[  +0.170567] systemd-fstab-generator[1017]: Ignoring "noauto" for root device
	[  +0.213856] systemd-fstab-generator[1030]: Ignoring "noauto" for root device
	[  +1.438543] kauditd_printk_skb: 28 callbacks suppressed
	[  +0.412728] systemd-fstab-generator[1205]: Ignoring "noauto" for root device
	[  +0.169553] systemd-fstab-generator[1216]: Ignoring "noauto" for root device
	[  +0.190085] systemd-fstab-generator[1227]: Ignoring "noauto" for root device
	[  +0.244785] systemd-fstab-generator[1242]: Ignoring "noauto" for root device
	[  +3.821951] systemd-fstab-generator[1467]: Ignoring "noauto" for root device
	[  +0.839463] kauditd_printk_skb: 29 callbacks suppressed
	[Jan22 12:33] kauditd_printk_skb: 18 callbacks suppressed
	[Jan22 12:34] hrtimer: interrupt took 2074405 ns
	
	
	==> etcd [7229244e2aeb] <==
	{"level":"info","ts":"2024-01-22T12:32:49.963055Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-22T12:32:49.963091Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-22T12:32:49.964472Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"23720dc5bda951ec switched to configuration voters=(2554119081614201324)"}
	{"level":"info","ts":"2024-01-22T12:32:49.965269Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"62c40a13fe136c89","local-member-id":"23720dc5bda951ec","added-peer-id":"23720dc5bda951ec","added-peer-peer-urls":["https://172.23.177.163:2380"]}
	{"level":"info","ts":"2024-01-22T12:32:49.965501Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"62c40a13fe136c89","local-member-id":"23720dc5bda951ec","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-22T12:32:49.965597Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-22T12:32:49.986095Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-22T12:32:49.98729Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"23720dc5bda951ec","initial-advertise-peer-urls":["https://172.23.191.211:2380"],"listen-peer-urls":["https://172.23.191.211:2380"],"advertise-client-urls":["https://172.23.191.211:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.23.191.211:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-22T12:32:49.987552Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-22T12:32:49.988215Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.23.191.211:2380"}
	{"level":"info","ts":"2024-01-22T12:32:49.98845Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.23.191.211:2380"}
	{"level":"info","ts":"2024-01-22T12:32:51.417055Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"23720dc5bda951ec is starting a new election at term 2"}
	{"level":"info","ts":"2024-01-22T12:32:51.417147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"23720dc5bda951ec became pre-candidate at term 2"}
	{"level":"info","ts":"2024-01-22T12:32:51.417177Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"23720dc5bda951ec received MsgPreVoteResp from 23720dc5bda951ec at term 2"}
	{"level":"info","ts":"2024-01-22T12:32:51.417306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"23720dc5bda951ec became candidate at term 3"}
	{"level":"info","ts":"2024-01-22T12:32:51.417352Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"23720dc5bda951ec received MsgVoteResp from 23720dc5bda951ec at term 3"}
	{"level":"info","ts":"2024-01-22T12:32:51.417392Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"23720dc5bda951ec became leader at term 3"}
	{"level":"info","ts":"2024-01-22T12:32:51.417508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 23720dc5bda951ec elected leader 23720dc5bda951ec at term 3"}
	{"level":"info","ts":"2024-01-22T12:32:51.426476Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-22T12:32:51.43065Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-22T12:32:51.426427Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"23720dc5bda951ec","local-member-attributes":"{Name:multinode-484200 ClientURLs:[https://172.23.191.211:2379]}","request-path":"/0/members/23720dc5bda951ec/attributes","cluster-id":"62c40a13fe136c89","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-22T12:32:51.426633Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-22T12:32:51.441085Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-22T12:32:51.441189Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-22T12:32:51.446732Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.23.191.211:2379"}
	
	
	==> kernel <==
	 12:38:12 up 7 min,  0 users,  load average: 1.25, 0.90, 0.43
	Linux multinode-484200 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [a6f3c78e1791] <==
	I0122 12:37:31.354795       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.23.181.87 Flags: [] Table: 0} 
	I0122 12:37:41.361913       1 main.go:223] Handling node with IPs: map[172.23.191.211:{}]
	I0122 12:37:41.362087       1 main.go:227] handling current node
	I0122 12:37:41.362266       1 main.go:223] Handling node with IPs: map[172.23.187.210:{}]
	I0122 12:37:41.362595       1 main.go:250] Node multinode-484200-m02 has CIDR [10.244.1.0/24] 
	I0122 12:37:41.363306       1 main.go:223] Handling node with IPs: map[172.23.181.87:{}]
	I0122 12:37:41.363374       1 main.go:250] Node multinode-484200-m03 has CIDR [10.244.2.0/24] 
	I0122 12:37:51.379039       1 main.go:223] Handling node with IPs: map[172.23.191.211:{}]
	I0122 12:37:51.379082       1 main.go:227] handling current node
	I0122 12:37:51.379095       1 main.go:223] Handling node with IPs: map[172.23.187.210:{}]
	I0122 12:37:51.379102       1 main.go:250] Node multinode-484200-m02 has CIDR [10.244.1.0/24] 
	I0122 12:37:51.379213       1 main.go:223] Handling node with IPs: map[172.23.181.87:{}]
	I0122 12:37:51.379225       1 main.go:250] Node multinode-484200-m03 has CIDR [10.244.2.0/24] 
	I0122 12:38:01.414412       1 main.go:223] Handling node with IPs: map[172.23.191.211:{}]
	I0122 12:38:01.414459       1 main.go:227] handling current node
	I0122 12:38:01.414471       1 main.go:223] Handling node with IPs: map[172.23.187.210:{}]
	I0122 12:38:01.414478       1 main.go:250] Node multinode-484200-m02 has CIDR [10.244.1.0/24] 
	I0122 12:38:01.415054       1 main.go:223] Handling node with IPs: map[172.23.181.87:{}]
	I0122 12:38:01.415152       1 main.go:250] Node multinode-484200-m03 has CIDR [10.244.2.0/24] 
	I0122 12:38:11.429203       1 main.go:223] Handling node with IPs: map[172.23.191.211:{}]
	I0122 12:38:11.429310       1 main.go:227] handling current node
	I0122 12:38:11.429322       1 main.go:223] Handling node with IPs: map[172.23.187.210:{}]
	I0122 12:38:11.429329       1 main.go:250] Node multinode-484200-m02 has CIDR [10.244.1.0/24] 
	I0122 12:38:11.429511       1 main.go:223] Handling node with IPs: map[172.23.181.87:{}]
	I0122 12:38:11.429524       1 main.go:250] Node multinode-484200-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [e55da8df34e5] <==
	I0122 12:28:52.858469       1 main.go:250] Node multinode-484200-m03 has CIDR [10.244.3.0/24] 
	I0122 12:29:02.870477       1 main.go:223] Handling node with IPs: map[172.23.177.163:{}]
	I0122 12:29:02.870621       1 main.go:227] handling current node
	I0122 12:29:02.870639       1 main.go:223] Handling node with IPs: map[172.23.185.74:{}]
	I0122 12:29:02.870648       1 main.go:250] Node multinode-484200-m02 has CIDR [10.244.1.0/24] 
	I0122 12:29:02.871055       1 main.go:223] Handling node with IPs: map[172.23.187.157:{}]
	I0122 12:29:02.871090       1 main.go:250] Node multinode-484200-m03 has CIDR [10.244.3.0/24] 
	I0122 12:29:12.887639       1 main.go:223] Handling node with IPs: map[172.23.177.163:{}]
	I0122 12:29:12.887806       1 main.go:227] handling current node
	I0122 12:29:12.887824       1 main.go:223] Handling node with IPs: map[172.23.185.74:{}]
	I0122 12:29:12.887834       1 main.go:250] Node multinode-484200-m02 has CIDR [10.244.1.0/24] 
	I0122 12:29:12.888411       1 main.go:223] Handling node with IPs: map[172.23.187.157:{}]
	I0122 12:29:12.888521       1 main.go:250] Node multinode-484200-m03 has CIDR [10.244.3.0/24] 
	I0122 12:29:22.898817       1 main.go:223] Handling node with IPs: map[172.23.177.163:{}]
	I0122 12:29:22.898912       1 main.go:227] handling current node
	I0122 12:29:22.898927       1 main.go:223] Handling node with IPs: map[172.23.185.74:{}]
	I0122 12:29:22.898936       1 main.go:250] Node multinode-484200-m02 has CIDR [10.244.1.0/24] 
	I0122 12:29:22.899520       1 main.go:223] Handling node with IPs: map[172.23.187.157:{}]
	I0122 12:29:22.899602       1 main.go:250] Node multinode-484200-m03 has CIDR [10.244.3.0/24] 
	I0122 12:29:32.915659       1 main.go:223] Handling node with IPs: map[172.23.177.163:{}]
	I0122 12:29:32.915783       1 main.go:227] handling current node
	I0122 12:29:32.915799       1 main.go:223] Handling node with IPs: map[172.23.185.74:{}]
	I0122 12:29:32.915808       1 main.go:250] Node multinode-484200-m02 has CIDR [10.244.1.0/24] 
	I0122 12:29:32.916031       1 main.go:223] Handling node with IPs: map[172.23.187.157:{}]
	I0122 12:29:32.916081       1 main.go:250] Node multinode-484200-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [83e9346ed407] <==
	I0122 12:32:53.290404       1 controller.go:116] Starting legacy_token_tracking_controller
	I0122 12:32:53.305170       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0122 12:32:53.283420       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0122 12:32:53.485902       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0122 12:32:53.495692       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0122 12:32:53.496643       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0122 12:32:53.504903       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0122 12:32:53.505108       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0122 12:32:53.505192       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0122 12:32:53.505300       1 shared_informer.go:318] Caches are synced for configmaps
	I0122 12:32:53.506229       1 aggregator.go:166] initial CRD sync complete...
	I0122 12:32:53.506422       1 autoregister_controller.go:141] Starting autoregister controller
	I0122 12:32:53.506519       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0122 12:32:53.506656       1 cache.go:39] Caches are synced for autoregister controller
	I0122 12:32:53.530268       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0122 12:32:53.540237       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0122 12:32:54.321579       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0122 12:32:54.753091       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.23.191.211]
	I0122 12:32:54.756241       1 controller.go:624] quota admission added evaluator for: endpoints
	I0122 12:32:54.763616       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0122 12:32:57.115682       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0122 12:32:57.360797       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0122 12:32:57.377150       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0122 12:32:57.463529       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0122 12:32:57.473413       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [9ccd405c4874] <==
	I0122 12:35:12.107720       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="30.799755ms"
	I0122 12:35:12.117839       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="10.057018ms"
	I0122 12:35:12.119298       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="125.5µs"
	I0122 12:35:12.119608       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="84.2µs"
	I0122 12:35:16.518151       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-484200-m02\" does not exist"
	I0122 12:35:16.520933       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-r46hz" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-r46hz"
	I0122 12:35:16.526263       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-484200-m02" podCIDRs=["10.244.1.0/24"]
	I0122 12:35:17.423173       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="54µs"
	I0122 12:35:26.145358       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-484200-m02"
	I0122 12:35:26.167631       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="97.701µs"
	I0122 12:35:26.664394       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-r46hz" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-r46hz"
	I0122 12:35:31.508266       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="161.5µs"
	I0122 12:35:31.761308       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="70.5µs"
	I0122 12:35:31.769662       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="50.6µs"
	I0122 12:35:32.723491       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="92.5µs"
	I0122 12:35:32.760052       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="51.2µs"
	I0122 12:35:34.744446       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="9.138113ms"
	I0122 12:35:34.747254       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="39.2µs"
	I0122 12:37:26.170077       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-484200-m02"
	I0122 12:37:26.689729       1 event.go:307] "Event occurred" object="multinode-484200-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-484200-m03 event: Removing Node multinode-484200-m03 from Controller"
	I0122 12:37:27.825331       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-484200-m03\" does not exist"
	I0122 12:37:27.825719       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-484200-m02"
	I0122 12:37:27.847380       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-484200-m03" podCIDRs=["10.244.2.0/24"]
	I0122 12:37:31.690593       1 event.go:307] "Event occurred" object="multinode-484200-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-484200-m03 event: Registered Node multinode-484200-m03 in Controller"
	I0122 12:38:05.250645       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-484200-m02"
	
	
	==> kube-controller-manager [dca0a89d970d] <==
	I0122 12:14:36.609843       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="53.44839ms"
	I0122 12:14:36.657840       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="47.66787ms"
	I0122 12:14:36.676563       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="18.353765ms"
	I0122 12:14:36.676927       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="33.2µs"
	I0122 12:14:38.897381       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="7.924528ms"
	I0122 12:14:38.897940       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="79.101µs"
	I0122 12:14:39.697878       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="13.230546ms"
	I0122 12:14:39.697940       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="31.3µs"
	I0122 12:18:24.168561       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-484200-m03\" does not exist"
	I0122 12:18:24.169421       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-484200-m02"
	I0122 12:18:24.192299       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-484200-m03"
	I0122 12:18:24.192539       1 event.go:307] "Event occurred" object="multinode-484200-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-484200-m03 event: Registered Node multinode-484200-m03 in Controller"
	I0122 12:18:24.208145       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-zs4rk"
	I0122 12:18:24.213204       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-484200-m03" podCIDRs=["10.244.2.0/24"]
	I0122 12:18:24.213555       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mxd9x"
	I0122 12:18:39.444630       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-484200-m02"
	I0122 12:26:14.311253       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-484200-m02"
	I0122 12:26:14.313050       1 event.go:307] "Event occurred" object="multinode-484200-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-484200-m03 status is now: NodeNotReady"
	I0122 12:26:14.327826       1 event.go:307] "Event occurred" object="kube-system/kindnet-zs4rk" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0122 12:26:14.359826       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-mxd9x" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0122 12:28:35.929912       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-484200-m02"
	I0122 12:28:37.296855       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-484200-m03\" does not exist"
	I0122 12:28:37.298113       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-484200-m02"
	I0122 12:28:37.304960       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-484200-m03" podCIDRs=["10.244.3.0/24"]
	I0122 12:28:49.893955       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-484200-m03"
	
	
	==> kube-proxy [3f4ebe9f43ea] <==
	I0122 12:11:01.288123       1 server_others.go:69] "Using iptables proxy"
	I0122 12:11:01.302839       1 node.go:141] Successfully retrieved node IP: 172.23.177.163
	I0122 12:11:01.345402       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0122 12:11:01.345469       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0122 12:11:01.349832       1 server_others.go:152] "Using iptables Proxier"
	I0122 12:11:01.350084       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0122 12:11:01.350520       1 server.go:846] "Version info" version="v1.28.4"
	I0122 12:11:01.350558       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0122 12:11:01.352054       1 config.go:188] "Starting service config controller"
	I0122 12:11:01.352099       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0122 12:11:01.352423       1 config.go:97] "Starting endpoint slice config controller"
	I0122 12:11:01.352457       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0122 12:11:01.355014       1 config.go:315] "Starting node config controller"
	I0122 12:11:01.355026       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0122 12:11:01.452781       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0122 12:11:01.452832       1 shared_informer.go:318] Caches are synced for service config
	I0122 12:11:01.455329       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [4b912976df5c] <==
	I0122 12:32:55.991752       1 server_others.go:69] "Using iptables proxy"
	I0122 12:32:56.159449       1 node.go:141] Successfully retrieved node IP: 172.23.191.211
	I0122 12:32:56.327918       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0122 12:32:56.328209       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0122 12:32:56.332630       1 server_others.go:152] "Using iptables Proxier"
	I0122 12:32:56.333575       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0122 12:32:56.336375       1 server.go:846] "Version info" version="v1.28.4"
	I0122 12:32:56.336528       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0122 12:32:56.339428       1 config.go:188] "Starting service config controller"
	I0122 12:32:56.340504       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0122 12:32:56.341285       1 config.go:315] "Starting node config controller"
	I0122 12:32:56.341921       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0122 12:32:56.343639       1 config.go:97] "Starting endpoint slice config controller"
	I0122 12:32:56.343825       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0122 12:32:56.441744       1 shared_informer.go:318] Caches are synced for service config
	I0122 12:32:56.443188       1 shared_informer.go:318] Caches are synced for node config
	I0122 12:32:56.444863       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [aed3428f2a6e] <==
	I0122 12:32:51.045512       1 serving.go:348] Generated self-signed cert in-memory
	W0122 12:32:53.379695       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0122 12:32:53.380139       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0122 12:32:53.380347       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0122 12:32:53.380534       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0122 12:32:53.466398       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0122 12:32:53.466631       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0122 12:32:53.474900       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0122 12:32:53.477040       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0122 12:32:53.475467       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0122 12:32:53.475481       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0122 12:32:53.577367       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [eae98e092eb2] <==
	E0122 12:10:44.053181       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0122 12:10:44.075736       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0122 12:10:44.075762       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0122 12:10:44.093151       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0122 12:10:44.093232       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0122 12:10:44.126909       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0122 12:10:44.127018       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0122 12:10:44.181040       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0122 12:10:44.181090       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0122 12:10:44.185425       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0122 12:10:44.185464       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0122 12:10:44.361095       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0122 12:10:44.361261       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0122 12:10:44.371930       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0122 12:10:44.372034       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0122 12:10:44.381618       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0122 12:10:44.381898       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0122 12:10:44.483844       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0122 12:10:44.484162       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0122 12:10:44.488731       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0122 12:10:44.489013       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0122 12:10:47.631321       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0122 12:29:41.938619       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0122 12:29:41.939213       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0122 12:29:41.939460       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-22 12:31:19 UTC, ends at Mon 2024-01-22 12:38:12 UTC. --
	Jan 22 12:33:27 multinode-484200 kubelet[1473]: I0122 12:33:27.332350    1473 scope.go:117] "RemoveContainer" containerID="ad2a662b28c0fb6fc95898bfecacab7a7643f7bb6ad31d63ae978fffa2f8e4af"
	Jan 22 12:33:27 multinode-484200 kubelet[1473]: E0122 12:33:27.337064    1473 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(30187bc8-2b16-4ca8-944b-c71bfb25e36b)\"" pod="kube-system/storage-provisioner" podUID="30187bc8-2b16-4ca8-944b-c71bfb25e36b"
	Jan 22 12:33:39 multinode-484200 kubelet[1473]: I0122 12:33:39.053517    1473 scope.go:117] "RemoveContainer" containerID="ad2a662b28c0fb6fc95898bfecacab7a7643f7bb6ad31d63ae978fffa2f8e4af"
	Jan 22 12:33:47 multinode-484200 kubelet[1473]: E0122 12:33:47.103244    1473 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 22 12:33:47 multinode-484200 kubelet[1473]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 22 12:33:47 multinode-484200 kubelet[1473]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 22 12:33:47 multinode-484200 kubelet[1473]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 22 12:33:47 multinode-484200 kubelet[1473]: I0122 12:33:47.132643    1473 scope.go:117] "RemoveContainer" containerID="c38b8024525ee5f30f97a688d317982f46476c7c197f6d021ab875b59cf0e544"
	Jan 22 12:33:47 multinode-484200 kubelet[1473]: I0122 12:33:47.169306    1473 scope.go:117] "RemoveContainer" containerID="487e8e2356938bcd2d97848c25847e0912990d160633f6aa3e75a1ffeea967d1"
	Jan 22 12:34:47 multinode-484200 kubelet[1473]: E0122 12:34:47.101722    1473 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 22 12:34:47 multinode-484200 kubelet[1473]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 22 12:34:47 multinode-484200 kubelet[1473]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 22 12:34:47 multinode-484200 kubelet[1473]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 22 12:35:47 multinode-484200 kubelet[1473]: E0122 12:35:47.110574    1473 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 22 12:35:47 multinode-484200 kubelet[1473]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 22 12:35:47 multinode-484200 kubelet[1473]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 22 12:35:47 multinode-484200 kubelet[1473]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 22 12:36:47 multinode-484200 kubelet[1473]: E0122 12:36:47.105134    1473 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 22 12:36:47 multinode-484200 kubelet[1473]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 22 12:36:47 multinode-484200 kubelet[1473]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 22 12:36:47 multinode-484200 kubelet[1473]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 22 12:37:47 multinode-484200 kubelet[1473]: E0122 12:37:47.105717    1473 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 22 12:37:47 multinode-484200 kubelet[1473]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 22 12:37:47 multinode-484200 kubelet[1473]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 22 12:37:47 multinode-484200 kubelet[1473]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0122 12:38:03.603572    5168 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
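The repeated `KUBE-KUBELET-CANARY` failures in the kubelet journal above come from the guest kernel not exposing the ip6tables `nat' table. A minimal diagnostic sketch, assuming shell access to the minikube VM (e.g. via `minikube ssh`); `ip6table_nat` is the kernel module that normally provides that table:

```shell
# Diagnostic sketch: is the module backing the ip6tables `nat' table loaded?
if grep -q '^ip6table_nat' /proc/modules 2>/dev/null; then
  msg="ip6table_nat module loaded"
else
  msg="ip6table_nat not loaded; try: sudo modprobe ip6table_nat"
fi
echo "$msg"
```

If loading the module succeeds, the kubelet's periodic canary check should stop logging `exit status 3` on the next hourly run.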
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-484200 -n multinode-484200
E0122 12:38:23.976349    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-484200 -n multinode-484200: (12.194013s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-484200 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (539.15s)
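Both failing tests also log `Unable to resolve the current Docker CLI context "default"` because the context metadata file under `.docker\contexts\meta\...` is missing. A hedged cleanup sketch, assuming a Unix-style home directory (the CI host in this log is Windows, where the equivalent path is the one shown in the warning); the hashed directory name is copied verbatim from the log and appears to be the SHA-256 of the context name `default`, and removing a stale copy simply lets the CLI recreate the default context on next use:

```shell
# Cleanup sketch: drop stale "default" context metadata so the Docker CLI
# rebuilds it. Directory name is taken from the warning in the log above.
meta_dir="$HOME/.docker/contexts/meta/37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f"
if [ -d "$meta_dir" ]; then
  rm -rf "$meta_dir"
  msg="removed stale context metadata"
else
  msg="no stale context metadata at $meta_dir"
fi
echo "$msg"
```

Alternatively, `docker context use default` resets the current-context pointer without touching the metadata directory.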

                                                
                                    
TestPreload (281.31s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-302600 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0122 12:43:23.970029    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
E0122 12:43:51.172894    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-671200\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p test-preload-302600 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: exit status 90 (3m27.4901419s)

                                                
                                                
-- stdout --
	* [test-preload-302600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18014
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting control plane node test-preload-302600 in cluster test-preload-302600
	* Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0122 12:40:43.593820   11664 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0122 12:40:43.661266   11664 out.go:296] Setting OutFile to fd 720 ...
	I0122 12:40:43.661266   11664 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0122 12:40:43.661266   11664 out.go:309] Setting ErrFile to fd 920...
	I0122 12:40:43.661266   11664 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0122 12:40:43.684277   11664 out.go:303] Setting JSON to false
	I0122 12:40:43.687287   11664 start.go:128] hostinfo: {"hostname":"minikube7","uptime":46612,"bootTime":1705880631,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3930 Build 19045.3930","kernelVersion":"10.0.19045.3930 Build 19045.3930","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0122 12:40:43.687287   11664 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0122 12:40:43.688282   11664 out.go:177] * [test-preload-302600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	I0122 12:40:43.689279   11664 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 12:40:43.689279   11664 notify.go:220] Checking for updates...
	I0122 12:40:43.690292   11664 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0122 12:40:43.690292   11664 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0122 12:40:43.691272   11664 out.go:177]   - MINIKUBE_LOCATION=18014
	I0122 12:40:43.692280   11664 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 12:40:43.693274   11664 driver.go:392] Setting default libvirt URI to qemu:///system
	I0122 12:40:49.075830   11664 out.go:177] * Using the hyperv driver based on user configuration
	I0122 12:40:49.076806   11664 start.go:298] selected driver: hyperv
	I0122 12:40:49.076806   11664 start.go:902] validating driver "hyperv" against <nil>
	I0122 12:40:49.076806   11664 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0122 12:40:49.125357   11664 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0122 12:40:49.126892   11664 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0122 12:40:49.127424   11664 cni.go:84] Creating CNI manager for ""
	I0122 12:40:49.127424   11664 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0122 12:40:49.127424   11664 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0122 12:40:49.127558   11664 start_flags.go:321] config:
	{Name:test-preload-302600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-302600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0122 12:40:49.127613   11664 iso.go:125] acquiring lock: {Name:mkdec6f26395f187031104148d2e41f41e1ae42b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 12:40:49.129434   11664 out.go:177] * Starting control plane node test-preload-302600 in cluster test-preload-302600
	I0122 12:40:49.130408   11664 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0122 12:40:49.130466   11664 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I0122 12:40:49.130466   11664 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.24.4 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.24.4
	I0122 12:40:49.130466   11664 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.5.3-0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.5.3-0
	I0122 12:40:49.131117   11664 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.7 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.7
	I0122 12:40:49.131117   11664 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.24.4 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.24.4
	I0122 12:40:49.130466   11664 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.24.4 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.24.4
	I0122 12:40:49.131117   11664 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.8.6 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.8.6
	I0122 12:40:49.130466   11664 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.24.4 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.24.4
	I0122 12:40:49.131117   11664 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\test-preload-302600\config.json ...
	I0122 12:40:49.131827   11664 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\test-preload-302600\config.json: {Name:mk318650f454d1e68808f84c256aa3fda0bf7524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 12:40:49.133881   11664 start.go:365] acquiring machines lock for test-preload-302600: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0122 12:40:49.134393   11664 start.go:369] acquired machines lock for "test-preload-302600" in 409.4µs
	I0122 12:40:49.134704   11664 start.go:93] Provisioning new machine with config: &{Name:test-preload-302600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernet
esConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-302600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0122 12:40:49.134963   11664 start.go:125] createHost starting for "" (driver="hyperv")
	I0122 12:40:49.136281   11664 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0122 12:40:49.136831   11664 start.go:159] libmachine.API.Create for "test-preload-302600" (driver="hyperv")
	I0122 12:40:49.136882   11664 client.go:168] LocalClient.Create starting
	I0122 12:40:49.137889   11664 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0122 12:40:49.138258   11664 main.go:141] libmachine: Decoding PEM data...
	I0122 12:40:49.138258   11664 main.go:141] libmachine: Parsing certificate...
	I0122 12:40:49.138258   11664 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0122 12:40:49.138923   11664 main.go:141] libmachine: Decoding PEM data...
	I0122 12:40:49.138967   11664 main.go:141] libmachine: Parsing certificate...
	I0122 12:40:49.138967   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0122 12:40:49.326913   11664 cache.go:107] acquiring lock: {Name:mk323820ff761a380a787761544315a8c79698f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 12:40:49.326913   11664 cache.go:107] acquiring lock: {Name:mk17a1797fd85035a52b0fe73cc96031114fcdec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 12:40:49.326913   11664 cache.go:107] acquiring lock: {Name:mk43c24b3570a50e54ec9f1dc43aba5ea2e54859 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 12:40:49.326913   11664 cache.go:107] acquiring lock: {Name:mk6184eea9b4e9ae990a75b6990b811a4a190d26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 12:40:49.327336   11664 cache.go:107] acquiring lock: {Name:mk5c5b833d3d5b3fa6da394ca2b338705203c957 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 12:40:49.326913   11664 cache.go:107] acquiring lock: {Name:mk37763b8953b6775431ccbe0baf82178e2e0deb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 12:40:49.326913   11664 cache.go:107] acquiring lock: {Name:mked9b25206e483c8eb228e3ce0429d0cc20b1b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 12:40:49.327336   11664 cache.go:107] acquiring lock: {Name:mk9187b34bf5aaccba80b0cd15d99c8332605601 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 12:40:49.331111   11664 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0122 12:40:49.332555   11664 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 12:40:49.332645   11664 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0122 12:40:49.332683   11664 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0122 12:40:49.332683   11664 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0122 12:40:49.332807   11664 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0122 12:40:49.332807   11664 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0122 12:40:49.333263   11664 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0122 12:40:49.351869   11664 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0122 12:40:49.352874   11664 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 12:40:49.352874   11664 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0122 12:40:49.357905   11664 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0122 12:40:49.357905   11664 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0122 12:40:49.369914   11664 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0122 12:40:49.379889   11664 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0122 12:40:49.388859   11664 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	W0122 12:40:49.454351   11664 image.go:187] authn lookup for registry.k8s.io/kube-scheduler:v1.24.4 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0122 12:40:49.561702   11664 image.go:187] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0122 12:40:49.670473   11664 image.go:187] authn lookup for registry.k8s.io/coredns/coredns:v1.8.6 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0122 12:40:49.780466   11664 image.go:187] authn lookup for registry.k8s.io/etcd:3.5.3-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0122 12:40:49.888654   11664 image.go:187] authn lookup for registry.k8s.io/kube-apiserver:v1.24.4 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0122 12:40:49.900105   11664 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	W0122 12:40:49.983189   11664 image.go:187] authn lookup for registry.k8s.io/pause:3.7 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0122 12:40:50.040098   11664 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.24.4
	I0122 12:40:50.046085   11664 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.8.6
	W0122 12:40:50.087423   11664 image.go:187] authn lookup for registry.k8s.io/kube-proxy:v1.24.4 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0122 12:40:50.101590   11664 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I0122 12:40:50.102288   11664 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 971.8171ms
	I0122 12:40:50.102288   11664 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I0122 12:40:50.104593   11664 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.5.3-0
	I0122 12:40:50.171733   11664 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.24.4
	W0122 12:40:50.183121   11664 image.go:187] authn lookup for registry.k8s.io/kube-controller-manager:v1.24.4 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0122 12:40:50.224421   11664 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.7
	I0122 12:40:50.335202   11664 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.24.4
	I0122 12:40:50.425921   11664 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.24.4
	I0122 12:40:50.492774   11664 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.7 exists
	I0122 12:40:50.492833   11664 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.7" took 1.3617094s
	I0122 12:40:50.492833   11664 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.7 succeeded
	I0122 12:40:50.892271   11664 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.8.6 exists
	I0122 12:40:50.893264   11664 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.8.6" took 1.7621389s
	I0122 12:40:50.893264   11664 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.8.6 succeeded
	I0122 12:40:51.129800   11664 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.24.4 exists
	I0122 12:40:51.130141   11664 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.24.4" took 1.9990143s
	I0122 12:40:51.130141   11664 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.24.4 succeeded
	I0122 12:40:51.234335   11664 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.24.4 exists
	I0122 12:40:51.235017   11664 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.24.4" took 2.1045409s
	I0122 12:40:51.235017   11664 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.24.4 succeeded
	I0122 12:40:51.624390   11664 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.24.4 exists
	I0122 12:40:51.624390   11664 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.24.4" took 2.4932616s
	I0122 12:40:51.624390   11664 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.24.4 succeeded
	I0122 12:40:51.715631   11664 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0122 12:40:51.715631   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:40:51.715631   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0122 12:40:52.261451   11664 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.24.4 exists
	I0122 12:40:52.261979   11664 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.24.4" took 3.1303201s
	I0122 12:40:52.261979   11664 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.24.4 succeeded
	I0122 12:40:52.694702   11664 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.5.3-0 exists
	I0122 12:40:52.694702   11664 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.5.3-0" took 3.5642199s
	I0122 12:40:52.694702   11664 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.5.3-0 succeeded
	I0122 12:40:52.694702   11664 cache.go:87] Successfully saved all images to host disk.
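The `cache.go` lines above follow a simple exists-then-skip pattern: each image ref maps to a tar path under `.minikube\cache\images` (tag separator `:` becomes `_`), and the download is skipped when that file already exists. A minimal shell sketch of the same pattern — the `cache_image` helper and the touch standing in for the real pull are illustrative, not minikube's code:

```shell
#!/bin/sh
# Exists-then-skip image caching, as in the cache.go lines above.
# cache_image and the ":" -> "_" path mapping are illustrative stand-ins.
cache_image() {
  cache_dir=$1; image=$2
  # e.g. registry.k8s.io/pause:3.7 -> $cache_dir/registry.k8s.io/pause_3.7
  tarball="$cache_dir/$(echo "$image" | tr ':' '_')"
  if [ -f "$tarball" ]; then
    echo "cache hit: $image"
  else
    echo "cache miss: $image"
    mkdir -p "$(dirname "$tarball")"
    : > "$tarball"           # stand-in for the real pull + save-to-tar
  fi
}

dir=$(mktemp -d)
cache_image "$dir" "registry.k8s.io/pause:3.7"   # first call: cache miss
cache_image "$dir" "registry.k8s.io/pause:3.7"   # second call: cache hit
```

The log's `cache.go:157 ... exists` / `cache.go:162 opening:` pairs correspond to the hit and miss branches respectively.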
	I0122 12:40:53.579829   11664 main.go:141] libmachine: [stdout =====>] : False
	
	I0122 12:40:53.579829   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:40:53.579829   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0122 12:40:55.112675   11664 main.go:141] libmachine: [stdout =====>] : True
	
	I0122 12:40:55.112815   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:40:55.112815   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0122 12:40:58.700457   11664 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0122 12:40:58.700457   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:40:58.703624   11664 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0122 12:40:59.183114   11664 main.go:141] libmachine: Creating SSH key...
	I0122 12:40:59.283097   11664 main.go:141] libmachine: Creating VM...
	I0122 12:40:59.283097   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0122 12:41:02.113055   11664 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0122 12:41:02.113314   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:41:02.113545   11664 main.go:141] libmachine: Using switch "Default Switch"
	I0122 12:41:02.113545   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0122 12:41:03.897618   11664 main.go:141] libmachine: [stdout =====>] : True
	
	I0122 12:41:03.897718   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:41:03.897718   11664 main.go:141] libmachine: Creating VHD
	I0122 12:41:03.897876   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\test-preload-302600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0122 12:41:07.641387   11664 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\test-preload-302600\fixed.
	                          vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : D5C80F99-D324-4D0E-9E19-29874C1299CC
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0122 12:41:07.641477   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:41:07.641477   11664 main.go:141] libmachine: Writing magic tar header
	I0122 12:41:07.641553   11664 main.go:141] libmachine: Writing SSH key tar header
	I0122 12:41:07.650959   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\test-preload-302600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\test-preload-302600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0122 12:41:10.808403   11664 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:41:10.808403   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:41:10.808510   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\test-preload-302600\disk.vhd' -SizeBytes 20000MB
	I0122 12:41:13.376047   11664 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:41:13.376047   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:41:13.376303   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM test-preload-302600 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\test-preload-302600' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0122 12:41:16.873671   11664 main.go:141] libmachine: [stdout =====>] : 
	Name                State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                ----- ----------- ----------------- ------   ------             -------
	test-preload-302600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0122 12:41:16.873841   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:41:16.873917   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName test-preload-302600 -DynamicMemoryEnabled $false
	I0122 12:41:19.093031   11664 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:41:19.093031   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:41:19.093124   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor test-preload-302600 -Count 2
	I0122 12:41:21.269325   11664 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:41:21.269325   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:41:21.269393   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName test-preload-302600 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\test-preload-302600\boot2docker.iso'
	I0122 12:41:23.860394   11664 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:41:23.860394   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:41:23.860632   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName test-preload-302600 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\test-preload-302600\disk.vhd'
	I0122 12:41:26.469833   11664 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:41:26.470054   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:41:26.470054   11664 main.go:141] libmachine: Starting VM...
	I0122 12:41:26.470054   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM test-preload-302600
	I0122 12:41:29.366535   11664 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:41:29.366691   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:41:29.366691   11664 main.go:141] libmachine: Waiting for host to start...
	I0122 12:41:29.366691   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-302600 ).state
	I0122 12:41:31.701771   11664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:41:31.702110   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:41:31.702213   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-302600 ).networkadapters[0]).ipaddresses[0]
	I0122 12:41:34.236414   11664 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:41:34.236414   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:41:35.242280   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-302600 ).state
	I0122 12:41:37.546302   11664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:41:37.546302   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:41:37.546302   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-302600 ).networkadapters[0]).ipaddresses[0]
	I0122 12:41:40.251415   11664 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:41:40.251449   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:41:41.255306   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-302600 ).state
	I0122 12:41:43.547802   11664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:41:43.547838   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:41:43.547900   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-302600 ).networkadapters[0]).ipaddresses[0]
	I0122 12:41:46.092988   11664 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:41:46.093052   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:41:47.107669   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-302600 ).state
	I0122 12:41:49.310087   11664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:41:49.310087   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:41:49.310153   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-302600 ).networkadapters[0]).ipaddresses[0]
	I0122 12:41:51.838163   11664 main.go:141] libmachine: [stdout =====>] : 
	I0122 12:41:51.838364   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:41:52.839907   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-302600 ).state
	I0122 12:41:55.082618   11664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:41:55.082618   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:41:55.082618   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-302600 ).networkadapters[0]).ipaddresses[0]
	I0122 12:41:57.667102   11664 main.go:141] libmachine: [stdout =====>] : 172.23.189.164
	
	I0122 12:41:57.667102   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:41:57.667212   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-302600 ).state
	I0122 12:41:59.782846   11664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:41:59.782846   11664 main.go:141] libmachine: [stderr =====>] : 
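Between `Start-VM` and "provisioning docker machine", the driver sits in a poll loop: query `( Hyper-V\Get-VM ... ).state`, then the first adapter IP address, sleep about a second, and retry until a non-empty address comes back (six rounds here before `172.23.189.164` appears). A sketch of that retry loop, with a stub function standing in for the PowerShell query:

```shell
#!/bin/sh
# Wait-for-IP retry loop, as in the "Waiting for host to start..." section.
# get_vm_ip is a stub: it returns nothing for the first two queries and an
# address on the third, mimicking an adapter that takes a while to come up.
cnt=$(mktemp); echo 0 > "$cnt"
get_vm_ip() {
  n=$(($(cat "$cnt") + 1)); echo "$n" > "$cnt"
  [ "$n" -ge 3 ] && echo "172.23.189.164"
}

wait_for_ip() {
  retries=$1
  while [ "$retries" -gt 0 ]; do
    ip=$(get_vm_ip)
    [ -n "$ip" ] && { echo "$ip"; return 0; }
    retries=$((retries - 1))
    # sleep 1   # the real loop pauses ~1s between PowerShell invocations
  done
  return 1
}

wait_for_ip 10
```

The retry budget matters on Hyper-V: the Default Switch hands out addresses via its internal DHCP, so the first few `ipaddresses[0]` queries after boot legitimately return empty, as the blank `[stdout =====>]` lines above show.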
	I0122 12:41:59.782846   11664 machine.go:88] provisioning docker machine ...
	I0122 12:41:59.783286   11664 buildroot.go:166] provisioning hostname "test-preload-302600"
	I0122 12:41:59.783331   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-302600 ).state
	I0122 12:42:01.960376   11664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:42:01.960570   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:42:01.960570   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-302600 ).networkadapters[0]).ipaddresses[0]
	I0122 12:42:04.518378   11664 main.go:141] libmachine: [stdout =====>] : 172.23.189.164
	
	I0122 12:42:04.518729   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:42:04.526408   11664 main.go:141] libmachine: Using SSH client type: native
	I0122 12:42:04.527295   11664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.189.164 22 <nil> <nil>}
	I0122 12:42:04.527295   11664 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-302600 && echo "test-preload-302600" | sudo tee /etc/hostname
	I0122 12:42:04.686944   11664 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-302600
	
	I0122 12:42:04.687125   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-302600 ).state
	I0122 12:42:06.810319   11664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:42:06.810319   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:42:06.810621   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-302600 ).networkadapters[0]).ipaddresses[0]
	I0122 12:42:09.344289   11664 main.go:141] libmachine: [stdout =====>] : 172.23.189.164
	
	I0122 12:42:09.344289   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:42:09.350627   11664 main.go:141] libmachine: Using SSH client type: native
	I0122 12:42:09.351420   11664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.189.164 22 <nil> <nil>}
	I0122 12:42:09.351420   11664 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-302600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-302600/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-302600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0122 12:42:09.499810   11664 main.go:141] libmachine: SSH cmd err, output: <nil>: 
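The SSH command above is the standard machine-provisioning move: if the new hostname is not already in `/etc/hosts`, rewrite an existing `127.0.1.1` entry to point at it, otherwise append a fresh one. The same grep/sed logic, run against a temporary copy instead of the real `/etc/hosts` (no sudo needed; GNU grep/sed assumed, and the pre-existing `buildroot` entry is an illustrative stand-in):

```shell
#!/bin/sh
# The /etc/hosts edit from the SSH command above, against a temp file.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 buildroot\n' > "$hosts"
name="test-preload-302600"

if ! grep -q "\s$name" "$hosts"; then            # hostname not present yet
  if grep -q '^127\.0\.1\.1\s' "$hosts"; then    # rewrite the existing entry
    sed -i "s/^127.0.1.1\s.*/127.0.1.1 $name/" "$hosts"
  else                                           # or append a fresh one
    echo "127.0.1.1 $name" >> "$hosts"
  fi
fi
cat "$hosts"
```

The outer `grep` guard makes the edit idempotent, which is why the provisioner can safely re-run it on every `configureHostname` pass.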
	I0122 12:42:09.499810   11664 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0122 12:42:09.499810   11664 buildroot.go:174] setting up certificates
	I0122 12:42:09.499810   11664 provision.go:83] configureAuth start
	I0122 12:42:09.499810   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-302600 ).state
	I0122 12:42:11.674448   11664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:42:11.674448   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:42:11.674555   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-302600 ).networkadapters[0]).ipaddresses[0]
	I0122 12:42:14.197615   11664 main.go:141] libmachine: [stdout =====>] : 172.23.189.164
	
	I0122 12:42:14.197615   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:42:14.197714   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-302600 ).state
	I0122 12:42:16.347994   11664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:42:16.347994   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:42:16.348110   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-302600 ).networkadapters[0]).ipaddresses[0]
	I0122 12:42:18.860011   11664 main.go:141] libmachine: [stdout =====>] : 172.23.189.164
	
	I0122 12:42:18.860132   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:42:18.860132   11664 provision.go:138] copyHostCerts
	I0122 12:42:18.860132   11664 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0122 12:42:18.860132   11664 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0122 12:42:18.861039   11664 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0122 12:42:18.862611   11664 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0122 12:42:18.862611   11664 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0122 12:42:18.863196   11664 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0122 12:42:18.864398   11664 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0122 12:42:18.864398   11664 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0122 12:42:18.864973   11664 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0122 12:42:18.865936   11664 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.test-preload-302600 san=[172.23.189.164 172.23.189.164 localhost 127.0.0.1 minikube test-preload-302600]
	I0122 12:42:18.993299   11664 provision.go:172] copyRemoteCerts
	I0122 12:42:19.006576   11664 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0122 12:42:19.006576   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-302600 ).state
	I0122 12:42:21.118928   11664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:42:21.119240   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:42:21.119240   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-302600 ).networkadapters[0]).ipaddresses[0]
	I0122 12:42:23.678359   11664 main.go:141] libmachine: [stdout =====>] : 172.23.189.164
	
	I0122 12:42:23.678359   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:42:23.679112   11664 sshutil.go:53] new ssh client: &{IP:172.23.189.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\test-preload-302600\id_rsa Username:docker}
	I0122 12:42:23.786476   11664 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7798786s)
	I0122 12:42:23.787141   11664 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0122 12:42:23.823390   11664 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1233 bytes)
	I0122 12:42:23.860441   11664 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0122 12:42:23.901422   11664 provision.go:86] duration metric: configureAuth took 14.4015486s
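`configureAuth` above covers the whole Docker TLS dance: refresh the host-side CA material (`copyHostCerts`), mint a server certificate whose SANs cover the VM IP and hostname (`provision.go:112`), and scp the results into `/etc/docker` (`copyRemoteCerts`). A rough `openssl` equivalent of the cert-minting step — minikube does this in Go, not by shelling out to openssl, so file names and subjects here are illustrative only:

```shell
#!/bin/sh
# Mint a CA and a SAN-bearing server cert, roughly what provision.go does
# before copying ca.pem/server.pem/server-key.pem to /etc/docker.
dir=$(mktemp -d)

# CA key + self-signed CA cert (ca-key.pem / ca.pem)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/O=minikubeCA" -days 1 \
  -keyout "$dir/ca-key.pem" -out "$dir/ca.pem" 2>/dev/null

# Server key + CSR, then sign with the CA, adding the SANs from the log line:
# 172.23.189.164, localhost, 127.0.0.1, minikube, test-preload-302600
openssl req -newkey rsa:2048 -nodes -subj "/O=jenkins.test-preload-302600" \
  -keyout "$dir/server-key.pem" -out "$dir/server.csr" 2>/dev/null
printf 'subjectAltName=IP:172.23.189.164,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:test-preload-302600\n' > "$dir/san.cnf"
openssl x509 -req -days 1 -in "$dir/server.csr" -CA "$dir/ca.pem" \
  -CAkey "$dir/ca-key.pem" -CAcreateserial -extfile "$dir/san.cnf" \
  -out "$dir/server.pem" 2>/dev/null

openssl verify -CAfile "$dir/ca.pem" "$dir/server.pem"
```

The IP appears in the SAN list because dockerd is started with `--tlsverify` on `tcp://0.0.0.0:2376` (see the generated unit file below in the log), so clients validate the certificate against the address they dial.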
	I0122 12:42:23.901495   11664 buildroot.go:189] setting minikube options for container-runtime
	I0122 12:42:23.902169   11664 config.go:182] Loaded profile config "test-preload-302600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.24.4
	I0122 12:42:23.902235   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-302600 ).state
	I0122 12:42:26.029766   11664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:42:26.029766   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:42:26.029865   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-302600 ).networkadapters[0]).ipaddresses[0]
	I0122 12:42:28.593329   11664 main.go:141] libmachine: [stdout =====>] : 172.23.189.164
	
	I0122 12:42:28.593329   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:42:28.600214   11664 main.go:141] libmachine: Using SSH client type: native
	I0122 12:42:28.600214   11664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.189.164 22 <nil> <nil>}
	I0122 12:42:28.600214   11664 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0122 12:42:28.741061   11664 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0122 12:42:28.741061   11664 buildroot.go:70] root file system type: tmpfs
	I0122 12:42:28.741301   11664 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0122 12:42:28.741301   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-302600 ).state
	I0122 12:42:30.866784   11664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:42:30.866784   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:42:30.867148   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-302600 ).networkadapters[0]).ipaddresses[0]
	I0122 12:42:33.439255   11664 main.go:141] libmachine: [stdout =====>] : 172.23.189.164
	
	I0122 12:42:33.439506   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:42:33.445666   11664 main.go:141] libmachine: Using SSH client type: native
	I0122 12:42:33.446358   11664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.189.164 22 <nil> <nil>}
	I0122 12:42:33.446358   11664 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0122 12:42:33.608510   11664 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0122 12:42:33.608591   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-302600 ).state
	I0122 12:42:35.728667   11664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:42:35.728667   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:42:35.729001   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-302600 ).networkadapters[0]).ipaddresses[0]
	I0122 12:42:38.290343   11664 main.go:141] libmachine: [stdout =====>] : 172.23.189.164
	
	I0122 12:42:38.290596   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:42:38.296183   11664 main.go:141] libmachine: Using SSH client type: native
	I0122 12:42:38.296864   11664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.189.164 22 <nil> <nil>}
	I0122 12:42:38.296864   11664 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0122 12:42:39.269103   11664 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0122 12:42:39.269103   11664 machine.go:91] provisioned docker machine in 39.4857367s
	I0122 12:42:39.269103   11664 client.go:171] LocalClient.Create took 1m50.1316293s
	I0122 12:42:39.269103   11664 start.go:167] duration metric: libmachine.API.Create for "test-preload-302600" took 1m50.1317867s
	I0122 12:42:39.269103   11664 start.go:300] post-start starting for "test-preload-302600" (driver="hyperv")
	I0122 12:42:39.269103   11664 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0122 12:42:39.283529   11664 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0122 12:42:39.283529   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-302600 ).state
	I0122 12:42:41.426736   11664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:42:41.426934   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:42:41.426934   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-302600 ).networkadapters[0]).ipaddresses[0]
	I0122 12:42:43.989489   11664 main.go:141] libmachine: [stdout =====>] : 172.23.189.164
	
	I0122 12:42:43.989489   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:42:43.989489   11664 sshutil.go:53] new ssh client: &{IP:172.23.189.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\test-preload-302600\id_rsa Username:docker}
	I0122 12:42:44.099093   11664 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8154748s)
	I0122 12:42:44.116387   11664 ssh_runner.go:195] Run: cat /etc/os-release
	I0122 12:42:44.126035   11664 info.go:137] Remote host: Buildroot 2021.02.12
	I0122 12:42:44.126035   11664 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0122 12:42:44.126035   11664 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0122 12:42:44.128197   11664 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem -> 63962.pem in /etc/ssl/certs
	I0122 12:42:44.143056   11664 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0122 12:42:44.157765   11664 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\63962.pem --> /etc/ssl/certs/63962.pem (1708 bytes)
	I0122 12:42:44.203371   11664 start.go:303] post-start completed in 4.9342466s
	I0122 12:42:44.206724   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-302600 ).state
	I0122 12:42:46.353221   11664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:42:46.353221   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:42:46.353221   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-302600 ).networkadapters[0]).ipaddresses[0]
	I0122 12:42:48.885227   11664 main.go:141] libmachine: [stdout =====>] : 172.23.189.164
	
	I0122 12:42:48.885227   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:42:48.885604   11664 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\test-preload-302600\config.json ...
	I0122 12:42:48.888886   11664 start.go:128] duration metric: createHost completed in 1m59.7533962s
	I0122 12:42:48.889024   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-302600 ).state
	I0122 12:42:51.021088   11664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:42:51.021308   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:42:51.021308   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-302600 ).networkadapters[0]).ipaddresses[0]
	I0122 12:42:53.582441   11664 main.go:141] libmachine: [stdout =====>] : 172.23.189.164
	
	I0122 12:42:53.582441   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:42:53.589190   11664 main.go:141] libmachine: Using SSH client type: native
	I0122 12:42:53.590092   11664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.189.164 22 <nil> <nil>}
	I0122 12:42:53.590092   11664 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0122 12:42:53.730376   11664 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705927373.730449729
	
	I0122 12:42:53.730376   11664 fix.go:206] guest clock: 1705927373.730449729
	I0122 12:42:53.730489   11664 fix.go:219] Guest: 2024-01-22 12:42:53.730449729 +0000 UTC Remote: 2024-01-22 12:42:48.8888869 +0000 UTC m=+125.388895701 (delta=4.841562829s)
	I0122 12:42:53.730557   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-302600 ).state
	I0122 12:42:55.903713   11664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:42:55.903902   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:42:55.903902   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-302600 ).networkadapters[0]).ipaddresses[0]
	I0122 12:42:58.476499   11664 main.go:141] libmachine: [stdout =====>] : 172.23.189.164
	
	I0122 12:42:58.476499   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:42:58.482167   11664 main.go:141] libmachine: Using SSH client type: native
	I0122 12:42:58.482885   11664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x456240] 0x458d80 <nil>  [] 0s} 172.23.189.164 22 <nil> <nil>}
	I0122 12:42:58.482885   11664 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1705927373
	I0122 12:42:58.633943   11664 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan 22 12:42:53 UTC 2024
	
	I0122 12:42:58.633943   11664 fix.go:226] clock set: Mon Jan 22 12:42:53 UTC 2024
	 (err=<nil>)
	I0122 12:42:58.633943   11664 start.go:83] releasing machines lock for "test-preload-302600", held for 2m9.4988727s
	I0122 12:42:58.634166   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-302600 ).state
	I0122 12:43:00.734174   11664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:43:00.734174   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:43:00.734313   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-302600 ).networkadapters[0]).ipaddresses[0]
	I0122 12:43:03.230283   11664 main.go:141] libmachine: [stdout =====>] : 172.23.189.164
	
	I0122 12:43:03.230283   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:43:03.235555   11664 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0122 12:43:03.235638   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-302600 ).state
	I0122 12:43:03.246877   11664 ssh_runner.go:195] Run: cat /version.json
	I0122 12:43:03.246877   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-302600 ).state
	I0122 12:43:05.460220   11664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:43:05.460337   11664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:43:05.460337   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:43:05.460337   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:43:05.460337   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-302600 ).networkadapters[0]).ipaddresses[0]
	I0122 12:43:05.460440   11664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-302600 ).networkadapters[0]).ipaddresses[0]
	I0122 12:43:08.106003   11664 main.go:141] libmachine: [stdout =====>] : 172.23.189.164
	
	I0122 12:43:08.106198   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:43:08.106651   11664 sshutil.go:53] new ssh client: &{IP:172.23.189.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\test-preload-302600\id_rsa Username:docker}
	I0122 12:43:08.128477   11664 main.go:141] libmachine: [stdout =====>] : 172.23.189.164
	
	I0122 12:43:08.128477   11664 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:43:08.128477   11664 sshutil.go:53] new ssh client: &{IP:172.23.189.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\test-preload-302600\id_rsa Username:docker}
	I0122 12:43:08.271766   11664 ssh_runner.go:235] Completed: cat /version.json: (5.0248666s)
	I0122 12:43:08.272830   11664 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0372533s)
	I0122 12:43:08.286352   11664 ssh_runner.go:195] Run: systemctl --version
	I0122 12:43:08.305722   11664 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0122 12:43:08.312710   11664 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0122 12:43:08.323716   11664 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0122 12:43:08.347307   11664 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0122 12:43:08.347307   11664 start.go:475] detecting cgroup driver to use...
	I0122 12:43:08.347520   11664 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 12:43:08.386809   11664 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0122 12:43:08.415278   11664 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0122 12:43:08.431109   11664 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0122 12:43:08.443532   11664 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0122 12:43:08.469998   11664 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 12:43:08.501456   11664 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0122 12:43:08.528675   11664 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 12:43:08.558274   11664 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0122 12:43:08.593088   11664 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0122 12:43:08.621919   11664 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0122 12:43:08.648878   11664 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0122 12:43:08.677803   11664 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 12:43:08.840402   11664 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0122 12:43:08.863766   11664 start.go:475] detecting cgroup driver to use...
	I0122 12:43:08.877264   11664 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0122 12:43:08.909840   11664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 12:43:08.944580   11664 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0122 12:43:08.990916   11664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0122 12:43:09.028028   11664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0122 12:43:09.063943   11664 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0122 12:43:09.122735   11664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0122 12:43:09.143323   11664 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 12:43:09.186541   11664 ssh_runner.go:195] Run: which cri-dockerd
	I0122 12:43:09.204541   11664 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0122 12:43:09.218472   11664 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0122 12:43:09.254622   11664 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0122 12:43:09.414787   11664 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0122 12:43:09.571010   11664 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0122 12:43:09.571250   11664 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0122 12:43:09.610994   11664 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 12:43:09.767575   11664 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0122 12:44:10.872855   11664 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1050115s)
	I0122 12:44:10.887096   11664 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0122 12:44:10.915764   11664 out.go:177] 
	W0122 12:44:10.917014   11664 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Mon 2024-01-22 12:41:48 UTC, ends at Mon 2024-01-22 12:44:10 UTC. --
	Jan 22 12:42:38 test-preload-302600 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 12:42:38 test-preload-302600 dockerd[673]: time="2024-01-22T12:42:38.845204060Z" level=info msg="Starting up"
	Jan 22 12:42:38 test-preload-302600 dockerd[673]: time="2024-01-22T12:42:38.846259272Z" level=info msg="containerd not running, starting managed containerd"
	Jan 22 12:42:38 test-preload-302600 dockerd[673]: time="2024-01-22T12:42:38.847342285Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.882600197Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.906305574Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.906501476Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.908889504Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.908984405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.909580712Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.909690313Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.909829015Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.910038718Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.910169219Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.910303821Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.910706625Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.910818427Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.910835827Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.911013029Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.911184331Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.911277932Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.911405733Z" level=info msg="metadata content store policy set" policy=shared
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.922689165Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.922810067Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.922832267Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.922866467Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.922884068Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.922895768Z" level=info msg="NRI interface is disabled by configuration."
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.922910168Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.923068770Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.923230972Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.923254772Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.923270072Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.923286472Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.923304072Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.923317773Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.923330673Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.923361173Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.923376273Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.923389473Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.923450974Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.923570576Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.924289784Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.924408885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.924478186Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.924519987Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.924574087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.924668688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.924689789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.924702489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.924715589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.924729489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.924748789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.924763390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.924777790Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.924833390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.924930191Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.924955092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.924980692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.924996992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.925015292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.925028793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.925041293Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.925057293Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.925069793Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.925120994Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.925533799Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.925671500Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.925809602Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.925905203Z" level=info msg="containerd successfully booted in 0.047614s"
	Jan 22 12:42:38 test-preload-302600 dockerd[673]: time="2024-01-22T12:42:38.971344334Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 22 12:42:38 test-preload-302600 dockerd[673]: time="2024-01-22T12:42:38.989069441Z" level=info msg="Loading containers: start."
	Jan 22 12:42:39 test-preload-302600 dockerd[673]: time="2024-01-22T12:42:39.185456600Z" level=info msg="Loading containers: done."
	Jan 22 12:42:39 test-preload-302600 dockerd[673]: time="2024-01-22T12:42:39.201753079Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 22 12:42:39 test-preload-302600 dockerd[673]: time="2024-01-22T12:42:39.201829180Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 22 12:42:39 test-preload-302600 dockerd[673]: time="2024-01-22T12:42:39.201841680Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 22 12:42:39 test-preload-302600 dockerd[673]: time="2024-01-22T12:42:39.201849380Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 22 12:42:39 test-preload-302600 dockerd[673]: time="2024-01-22T12:42:39.201866980Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 22 12:42:39 test-preload-302600 dockerd[673]: time="2024-01-22T12:42:39.201957981Z" level=info msg="Daemon has completed initialization"
	Jan 22 12:42:39 test-preload-302600 dockerd[673]: time="2024-01-22T12:42:39.266309686Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 22 12:42:39 test-preload-302600 systemd[1]: Started Docker Application Container Engine.
	Jan 22 12:42:39 test-preload-302600 dockerd[673]: time="2024-01-22T12:42:39.269727123Z" level=info msg="API listen on [::]:2376"
	Jan 22 12:43:09 test-preload-302600 dockerd[673]: time="2024-01-22T12:43:09.787722849Z" level=info msg="Processing signal 'terminated'"
	Jan 22 12:43:09 test-preload-302600 systemd[1]: Stopping Docker Application Container Engine...
	Jan 22 12:43:09 test-preload-302600 dockerd[673]: time="2024-01-22T12:43:09.789373849Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 22 12:43:09 test-preload-302600 dockerd[673]: time="2024-01-22T12:43:09.789506549Z" level=info msg="Daemon shutdown complete"
	Jan 22 12:43:09 test-preload-302600 dockerd[673]: time="2024-01-22T12:43:09.789693949Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 22 12:43:09 test-preload-302600 dockerd[673]: time="2024-01-22T12:43:09.789703249Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 22 12:43:10 test-preload-302600 systemd[1]: docker.service: Succeeded.
	Jan 22 12:43:10 test-preload-302600 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 12:43:10 test-preload-302600 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 12:43:10 test-preload-302600 dockerd[1009]: time="2024-01-22T12:43:10.864039649Z" level=info msg="Starting up"
	Jan 22 12:44:10 test-preload-302600 dockerd[1009]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 12:44:10 test-preload-302600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 12:44:10 test-preload-302600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 12:44:10 test-preload-302600 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Mon 2024-01-22 12:41:48 UTC, ends at Mon 2024-01-22 12:44:10 UTC. --
	Jan 22 12:42:38 test-preload-302600 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 12:42:38 test-preload-302600 dockerd[673]: time="2024-01-22T12:42:38.845204060Z" level=info msg="Starting up"
	Jan 22 12:42:38 test-preload-302600 dockerd[673]: time="2024-01-22T12:42:38.846259272Z" level=info msg="containerd not running, starting managed containerd"
	Jan 22 12:42:38 test-preload-302600 dockerd[673]: time="2024-01-22T12:42:38.847342285Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.882600197Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.906305574Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.906501476Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.908889504Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.908984405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.909580712Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.909690313Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.909829015Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.910038718Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.910169219Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.910303821Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.910706625Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.910818427Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.910835827Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.911013029Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.911184331Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.911277932Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.911405733Z" level=info msg="metadata content store policy set" policy=shared
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.922689165Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.922810067Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.922832267Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.922866467Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.922884068Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.922895768Z" level=info msg="NRI interface is disabled by configuration."
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.922910168Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.923068770Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.923230972Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.923254772Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.923270072Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.923286472Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.923304072Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.923317773Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.923330673Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.923361173Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.923376273Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.923389473Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.923450974Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.923570576Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.924289784Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.924408885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.924478186Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.924519987Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.924574087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.924668688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.924689789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.924702489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.924715589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.924729489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.924748789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.924763390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.924777790Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.924833390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.924930191Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.924955092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.924980692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.924996992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.925015292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.925028793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.925041293Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.925057293Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.925069793Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.925120994Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.925533799Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.925671500Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.925809602Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 22 12:42:38 test-preload-302600 dockerd[679]: time="2024-01-22T12:42:38.925905203Z" level=info msg="containerd successfully booted in 0.047614s"
	Jan 22 12:42:38 test-preload-302600 dockerd[673]: time="2024-01-22T12:42:38.971344334Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 22 12:42:38 test-preload-302600 dockerd[673]: time="2024-01-22T12:42:38.989069441Z" level=info msg="Loading containers: start."
	Jan 22 12:42:39 test-preload-302600 dockerd[673]: time="2024-01-22T12:42:39.185456600Z" level=info msg="Loading containers: done."
	Jan 22 12:42:39 test-preload-302600 dockerd[673]: time="2024-01-22T12:42:39.201753079Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 22 12:42:39 test-preload-302600 dockerd[673]: time="2024-01-22T12:42:39.201829180Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 22 12:42:39 test-preload-302600 dockerd[673]: time="2024-01-22T12:42:39.201841680Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 22 12:42:39 test-preload-302600 dockerd[673]: time="2024-01-22T12:42:39.201849380Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 22 12:42:39 test-preload-302600 dockerd[673]: time="2024-01-22T12:42:39.201866980Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 22 12:42:39 test-preload-302600 dockerd[673]: time="2024-01-22T12:42:39.201957981Z" level=info msg="Daemon has completed initialization"
	Jan 22 12:42:39 test-preload-302600 dockerd[673]: time="2024-01-22T12:42:39.266309686Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 22 12:42:39 test-preload-302600 systemd[1]: Started Docker Application Container Engine.
	Jan 22 12:42:39 test-preload-302600 dockerd[673]: time="2024-01-22T12:42:39.269727123Z" level=info msg="API listen on [::]:2376"
	Jan 22 12:43:09 test-preload-302600 dockerd[673]: time="2024-01-22T12:43:09.787722849Z" level=info msg="Processing signal 'terminated'"
	Jan 22 12:43:09 test-preload-302600 systemd[1]: Stopping Docker Application Container Engine...
	Jan 22 12:43:09 test-preload-302600 dockerd[673]: time="2024-01-22T12:43:09.789373849Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 22 12:43:09 test-preload-302600 dockerd[673]: time="2024-01-22T12:43:09.789506549Z" level=info msg="Daemon shutdown complete"
	Jan 22 12:43:09 test-preload-302600 dockerd[673]: time="2024-01-22T12:43:09.789693949Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 22 12:43:09 test-preload-302600 dockerd[673]: time="2024-01-22T12:43:09.789703249Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 22 12:43:10 test-preload-302600 systemd[1]: docker.service: Succeeded.
	Jan 22 12:43:10 test-preload-302600 systemd[1]: Stopped Docker Application Container Engine.
	Jan 22 12:43:10 test-preload-302600 systemd[1]: Starting Docker Application Container Engine...
	Jan 22 12:43:10 test-preload-302600 dockerd[1009]: time="2024-01-22T12:43:10.864039649Z" level=info msg="Starting up"
	Jan 22 12:44:10 test-preload-302600 dockerd[1009]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 22 12:44:10 test-preload-302600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 22 12:44:10 test-preload-302600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 22 12:44:10 test-preload-302600 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0122 12:44:10.917178   11664 out.go:239] * 
	W0122 12:44:10.918489   11664 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0122 12:44:10.919572   11664 out.go:177] 

                                                
                                                
** /stderr **
preload_test.go:46: out/minikube-windows-amd64.exe start -p test-preload-302600 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4 failed: exit status 90
panic.go:523: *** TestPreload FAILED at 2024-01-22 12:44:11.1724212 +0000 UTC m=+7690.285735201
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p test-preload-302600 -n test-preload-302600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p test-preload-302600 -n test-preload-302600: exit status 6 (12.0233382s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0122 12:44:11.293455    6556 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0122 12:44:23.115311    6556 status.go:415] kubeconfig endpoint: extract IP: "test-preload-302600" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "test-preload-302600" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "test-preload-302600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-302600
E0122 12:44:47.220133    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-302600: (1m1.5944321s)
--- FAIL: TestPreload (281.31s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (302.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-487400 --driver=hyperv
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-487400 --driver=hyperv: exit status 1 (4m59.7302559s)

                                                
                                                
-- stdout --
	* [NoKubernetes-487400] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18014
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting control plane node NoKubernetes-487400 in cluster NoKubernetes-487400
	* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0122 12:50:48.770647    8532 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-487400 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-487400 -n NoKubernetes-487400
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-487400 -n NoKubernetes-487400: exit status 7 (2.6805044s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0122 12:55:48.478035    3192 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-487400" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (302.41s)

                                                
                                    

Test pass (117/156)

Order  Passed test  Duration
3 TestDownloadOnly/v1.16.0/json-events 14.61
4 TestDownloadOnly/v1.16.0/preload-exists 0.08
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.3
9 TestDownloadOnly/v1.16.0/DeleteAll 1.27
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 1.24
12 TestDownloadOnly/v1.28.4/json-events 11.69
13 TestDownloadOnly/v1.28.4/preload-exists 0
16 TestDownloadOnly/v1.28.4/kubectl 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.43
18 TestDownloadOnly/v1.28.4/DeleteAll 1.34
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 1.26
21 TestDownloadOnly/v1.29.0-rc.2/json-events 11.18
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.26
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 1.25
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 1.21
30 TestBinaryMirror 7.22
31 TestOffline 252.37
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.28
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.29
36 TestAddons/Setup 385.71
39 TestAddons/parallel/Ingress 65.61
40 TestAddons/parallel/InspektorGadget 26.65
41 TestAddons/parallel/MetricsServer 22
42 TestAddons/parallel/HelmTiller 28.75
44 TestAddons/parallel/CSI 109.99
45 TestAddons/parallel/Headlamp 37.85
46 TestAddons/parallel/CloudSpanner 21.53
47 TestAddons/parallel/LocalPath 32.15
48 TestAddons/parallel/NvidiaDevicePlugin 20.91
49 TestAddons/parallel/Yakd 6.02
52 TestAddons/serial/GCPAuth/Namespaces 0.36
53 TestAddons/StoppedEnableDisable 47.71
54 TestCertOptions 446.52
55 TestCertExpiration 957.71
56 TestDockerFlags 473.28
58 TestForceSystemdEnv 404.74
65 TestErrorSpam/start 17.65
66 TestErrorSpam/status 37.24
67 TestErrorSpam/pause 23.15
68 TestErrorSpam/unpause 23.27
69 TestErrorSpam/stop 52.29
72 TestFunctional/serial/CopySyncFile 0.04
73 TestFunctional/serial/StartWithProxy 233.46
74 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/KubeContext 0.16
80 TestFunctional/serial/CacheCmd/cache/add_remote 348.81
81 TestFunctional/serial/CacheCmd/cache/add_local 60.83
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.26
83 TestFunctional/serial/CacheCmd/cache/list 0.27
86 TestFunctional/serial/CacheCmd/cache/delete 0.55
91 TestFunctional/serial/LogsCmd 108.28
94 TestFunctional/delete_addon-resizer_images 0.02
95 TestFunctional/delete_my-image_image 0.01
96 TestFunctional/delete_minikube_cached_images 0.02
100 TestImageBuild/serial/Setup 190.63
101 TestImageBuild/serial/NormalBuild 9.06
102 TestImageBuild/serial/BuildWithBuildArg 8.84
103 TestImageBuild/serial/BuildWithDockerIgnore 7.71
104 TestImageBuild/serial/BuildWithSpecifiedDockerfile 7.59
107 TestIngressAddonLegacy/StartLegacyK8sCluster 243
109 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 40.51
110 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 14.6
111 TestIngressAddonLegacy/serial/ValidateIngressAddons 82.08
114 TestJSONOutput/start/Command 234.18
115 TestJSONOutput/start/Audit 0
117 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
118 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
120 TestJSONOutput/pause/Command 7.93
121 TestJSONOutput/pause/Audit 0
123 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
124 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
126 TestJSONOutput/unpause/Command 7.79
127 TestJSONOutput/unpause/Audit 0
129 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
130 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
132 TestJSONOutput/stop/Command 28.96
133 TestJSONOutput/stop/Audit 0
135 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
136 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
137 TestErrorJSONOutput 1.53
142 TestMainNoArgs 0.25
143 TestMinikubeProfile 494.52
146 TestMountStart/serial/StartWithMountFirst 148.08
147 TestMountStart/serial/VerifyMountFirst 9.61
148 TestMountStart/serial/StartWithMountSecond 148.52
149 TestMountStart/serial/VerifyMountSecond 9.6
150 TestMountStart/serial/DeleteFirst 25.91
151 TestMountStart/serial/VerifyMountPostDelete 9.5
152 TestMountStart/serial/Stop 21.39
153 TestMountStart/serial/RestartStopped 111.22
154 TestMountStart/serial/VerifyMountPostStop 9.57
157 TestMultiNode/serial/FreshStart2Nodes 407.13
158 TestMultiNode/serial/DeployApp2Nodes 9.25
160 TestMultiNode/serial/AddNode 215.06
161 TestMultiNode/serial/MultiNodeLabels 0.2
162 TestMultiNode/serial/ProfileList 7.69
163 TestMultiNode/serial/CopyFile 363.94
164 TestMultiNode/serial/StopNode 66.49
165 TestMultiNode/serial/StartAfterStop 172.36
171 TestScheduledStopWindows 323.5
176 TestRunningBinaryUpgrade 1043.69
178 TestKubernetesUpgrade 1311.45
181 TestNoKubernetes/serial/StartNoK8sWithVersion 0.33
201 TestStoppedBinaryUpgrade/Setup 0.94
202 TestStoppedBinaryUpgrade/Upgrade 648.27
204 TestPause/serial/Start 391.74
205 TestPause/serial/SecondStartNoReconfiguration 349.49
207 TestStoppedBinaryUpgrade/MinikubeLogs 9.54
209 TestPause/serial/Pause 9.45
211 TestPause/serial/VerifyStatus 12.53
212 TestPause/serial/Unpause 8.05
213 TestPause/serial/PauseAgain 8.15
214 TestPause/serial/DeletePaused 36.61
215 TestPause/serial/VerifyDeletedResources 7.94

TestDownloadOnly/v1.16.0/json-events (14.61s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-879800 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-879800 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperv: (14.6047735s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (14.61s)

TestDownloadOnly/v1.16.0/preload-exists (0.08s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.08s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-879800
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-879800: exit status 85 (295.6146ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-879800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:36 UTC |          |
	|         | -p download-only-879800        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/22 10:36:01
	Running on machine: minikube7
	Binary: Built with gc go1.21.6 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0122 10:36:01.159343    5644 out.go:296] Setting OutFile to fd 588 ...
	I0122 10:36:01.160473    5644 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0122 10:36:01.160473    5644 out.go:309] Setting ErrFile to fd 592...
	I0122 10:36:01.160473    5644 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0122 10:36:01.177456    5644 root.go:314] Error reading config file at C:\Users\jenkins.minikube7\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube7\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0122 10:36:01.188063    5644 out.go:303] Setting JSON to true
	I0122 10:36:01.191248    5644 start.go:128] hostinfo: {"hostname":"minikube7","uptime":39130,"bootTime":1705880631,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3930 Build 19045.3930","kernelVersion":"10.0.19045.3930 Build 19045.3930","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0122 10:36:01.191248    5644 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0122 10:36:01.193719    5644 out.go:97] [download-only-879800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	I0122 10:36:01.193803    5644 notify.go:220] Checking for updates...
	I0122 10:36:01.195190    5644 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	W0122 10:36:01.193803    5644 preload.go:295] Failed to list preload files: open C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0122 10:36:01.196109    5644 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0122 10:36:01.196697    5644 out.go:169] MINIKUBE_LOCATION=18014
	I0122 10:36:01.197393    5644 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0122 10:36:01.199732    5644 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0122 10:36:01.200555    5644 driver.go:392] Setting default libvirt URI to qemu:///system
	I0122 10:36:06.949718    5644 out.go:97] Using the hyperv driver based on user configuration
	I0122 10:36:06.949881    5644 start.go:298] selected driver: hyperv
	I0122 10:36:06.949881    5644 start.go:902] validating driver "hyperv" against <nil>
	I0122 10:36:06.949881    5644 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0122 10:36:07.003782    5644 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0122 10:36:07.005261    5644 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0122 10:36:07.005349    5644 cni.go:84] Creating CNI manager for ""
	I0122 10:36:07.005349    5644 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0122 10:36:07.005349    5644 start_flags.go:321] config:
	{Name:download-only-879800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-879800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0122 10:36:07.006669    5644 iso.go:125] acquiring lock: {Name:mkdec6f26395f187031104148d2e41f41e1ae42b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 10:36:07.007421    5644 out.go:97] Downloading VM boot image ...
	I0122 10:36:07.008091    5644 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso.sha256 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.32.1-1703784139-17866-amd64.iso
	I0122 10:36:10.314120    5644 out.go:97] Starting control plane node download-only-879800 in cluster download-only-879800
	I0122 10:36:10.314120    5644 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0122 10:36:10.347736    5644 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0122 10:36:10.348166    5644 cache.go:56] Caching tarball of preloaded images
	I0122 10:36:10.349106    5644 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0122 10:36:10.349867    5644 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0122 10:36:10.349867    5644 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0122 10:36:10.417663    5644 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-879800"

-- /stdout --
** stderr ** 
	W0122 10:36:15.763244    6872 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.30s)
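The preload download URL in the log above follows a regular pattern. As a sketch (the layout is inferred from the logged URLs, not from minikube documentation), the tarball URL for a given Kubernetes version can be assembled like this:

```shell
# Assemble the preload tarball URL for a Kubernetes version.
# Assumption: the "v18" path segment and the file-name pattern are taken
# verbatim from the download.go log lines above; other versions may differ.
K8S_VERSION="v1.16.0"
PRELOAD="preloaded-images-k8s-v18-${K8S_VERSION}-docker-overlay2-amd64.tar.lz4"
URL="https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/${K8S_VERSION}/${PRELOAD}"
echo "$URL"
```

The logged request additionally appends a `?checksum=md5:<hash>` query parameter, which the later `preload.go` lines show being saved and verified against the downloaded tarball.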

TestDownloadOnly/v1.16.0/DeleteAll (1.27s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.2739545s)
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (1.27s)

TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (1.24s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-879800
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-879800: (1.2427262s)
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (1.24s)

TestDownloadOnly/v1.28.4/json-events (11.69s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-789300 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-789300 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=hyperv: (11.6860902s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (11.69s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.43s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-789300
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-789300: exit status 85 (426.02ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-879800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:36 UTC |                     |
	|         | -p download-only-879800        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:36 UTC | 22 Jan 24 10:36 UTC |
	| delete  | -p download-only-879800        | download-only-879800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:36 UTC | 22 Jan 24 10:36 UTC |
	| start   | -o=json --download-only        | download-only-789300 | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:36 UTC |                     |
	|         | -p download-only-789300        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/22 10:36:18
	Running on machine: minikube7
	Binary: Built with gc go1.21.6 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0122 10:36:18.641655    9020 out.go:296] Setting OutFile to fd 628 ...
	I0122 10:36:18.642280    9020 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0122 10:36:18.642280    9020 out.go:309] Setting ErrFile to fd 668...
	I0122 10:36:18.642421    9020 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0122 10:36:18.666987    9020 out.go:303] Setting JSON to true
	I0122 10:36:18.670636    9020 start.go:128] hostinfo: {"hostname":"minikube7","uptime":39147,"bootTime":1705880631,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3930 Build 19045.3930","kernelVersion":"10.0.19045.3930 Build 19045.3930","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0122 10:36:18.670636    9020 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0122 10:36:18.671792    9020 out.go:97] [download-only-789300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	I0122 10:36:18.672975    9020 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 10:36:18.671792    9020 notify.go:220] Checking for updates...
	I0122 10:36:18.673545    9020 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0122 10:36:18.674545    9020 out.go:169] MINIKUBE_LOCATION=18014
	I0122 10:36:18.675158    9020 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0122 10:36:18.676920    9020 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0122 10:36:18.678002    9020 driver.go:392] Setting default libvirt URI to qemu:///system
	I0122 10:36:24.308515    9020 out.go:97] Using the hyperv driver based on user configuration
	I0122 10:36:24.309090    9020 start.go:298] selected driver: hyperv
	I0122 10:36:24.309090    9020 start.go:902] validating driver "hyperv" against <nil>
	I0122 10:36:24.309294    9020 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0122 10:36:24.362211    9020 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0122 10:36:24.363418    9020 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0122 10:36:24.363418    9020 cni.go:84] Creating CNI manager for ""
	I0122 10:36:24.363418    9020 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0122 10:36:24.363418    9020 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0122 10:36:24.363418    9020 start_flags.go:321] config:
	{Name:download-only-789300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-789300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0122 10:36:24.364626    9020 iso.go:125] acquiring lock: {Name:mkdec6f26395f187031104148d2e41f41e1ae42b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 10:36:24.365692    9020 out.go:97] Starting control plane node download-only-789300 in cluster download-only-789300
	I0122 10:36:24.365692    9020 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0122 10:36:24.401063    9020 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0122 10:36:24.401490    9020 cache.go:56] Caching tarball of preloaded images
	I0122 10:36:24.402061    9020 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0122 10:36:24.402772    9020 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0122 10:36:24.402966    9020 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0122 10:36:24.460692    9020 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4?checksum=md5:7ebdea7754e21f51b865dbfc36b53b7d -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0122 10:36:27.919740    9020 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0122 10:36:27.921012    9020 preload.go:256] verifying checksum of C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-789300"

-- /stdout --
** stderr ** 
	W0122 10:36:30.255191    5472 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.43s)
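The non-zero exit above is the expected outcome: for a download-only profile no control plane node was ever started, so `minikube logs` exits with status 85 and the test still records a PASS. A minimal sketch of that exit-status check, using `sh -c 'exit 85'` as a hypothetical stand-in for the real minikube invocation:

```shell
# Stand-in for: out/minikube-windows-amd64.exe logs -p <profile>
# (hypothetical substitute; the real test shells out to minikube).
sh -c 'exit 85'
status=$?
if [ "$status" -eq 85 ]; then
    echo "exit status 85: expected when no control plane node exists"
fi
```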

TestDownloadOnly/v1.28.4/DeleteAll (1.34s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.3373602s)
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (1.34s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (1.26s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-789300
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-789300: (1.2589911s)
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (1.26s)

TestDownloadOnly/v1.29.0-rc.2/json-events (11.18s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-792600 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-792600 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=hyperv: (11.1823357s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (11.18s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.26s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-792600
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-792600: exit status 85 (262.6826ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-879800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:36 UTC |                     |
	|         | -p download-only-879800           |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |                   |         |                     |                     |
	|         | --container-runtime=docker        |                      |                   |         |                     |                     |
	|         | --driver=hyperv                   |                      |                   |         |                     |                     |
	| delete  | --all                             | minikube             | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:36 UTC | 22 Jan 24 10:36 UTC |
	| delete  | -p download-only-879800           | download-only-879800 | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:36 UTC | 22 Jan 24 10:36 UTC |
	| start   | -o=json --download-only           | download-only-789300 | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:36 UTC |                     |
	|         | -p download-only-789300           |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |                   |         |                     |                     |
	|         | --container-runtime=docker        |                      |                   |         |                     |                     |
	|         | --driver=hyperv                   |                      |                   |         |                     |                     |
	| delete  | --all                             | minikube             | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:36 UTC | 22 Jan 24 10:36 UTC |
	| delete  | -p download-only-789300           | download-only-789300 | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:36 UTC | 22 Jan 24 10:36 UTC |
	| start   | -o=json --download-only           | download-only-792600 | minikube7\jenkins | v1.32.0 | 22 Jan 24 10:36 UTC |                     |
	|         | -p download-only-792600           |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |                   |         |                     |                     |
	|         | --container-runtime=docker        |                      |                   |         |                     |                     |
	|         | --driver=hyperv                   |                      |                   |         |                     |                     |
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/22 10:36:33
	Running on machine: minikube7
	Binary: Built with gc go1.21.6 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0122 10:36:33.357717    7544 out.go:296] Setting OutFile to fd 736 ...
	I0122 10:36:33.358383    7544 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0122 10:36:33.358383    7544 out.go:309] Setting ErrFile to fd 716...
	I0122 10:36:33.358383    7544 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0122 10:36:33.380345    7544 out.go:303] Setting JSON to true
	I0122 10:36:33.384262    7544 start.go:128] hostinfo: {"hostname":"minikube7","uptime":39162,"bootTime":1705880631,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3930 Build 19045.3930","kernelVersion":"10.0.19045.3930 Build 19045.3930","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0122 10:36:33.384262    7544 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0122 10:36:33.385826    7544 out.go:97] [download-only-792600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	I0122 10:36:33.385826    7544 notify.go:220] Checking for updates...
	I0122 10:36:33.386719    7544 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0122 10:36:33.387394    7544 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0122 10:36:33.388076    7544 out.go:169] MINIKUBE_LOCATION=18014
	I0122 10:36:33.389200    7544 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0122 10:36:33.390824    7544 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0122 10:36:33.391503    7544 driver.go:392] Setting default libvirt URI to qemu:///system
	I0122 10:36:38.845967    7544 out.go:97] Using the hyperv driver based on user configuration
	I0122 10:36:38.845967    7544 start.go:298] selected driver: hyperv
	I0122 10:36:38.845967    7544 start.go:902] validating driver "hyperv" against <nil>
	I0122 10:36:38.845967    7544 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0122 10:36:38.894586    7544 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0122 10:36:38.895249    7544 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0122 10:36:38.895249    7544 cni.go:84] Creating CNI manager for ""
	I0122 10:36:38.895249    7544 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0122 10:36:38.895249    7544 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0122 10:36:38.895766    7544 start_flags.go:321] config:
	{Name:download-only-792600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-792600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0122 10:36:38.895891    7544 iso.go:125] acquiring lock: {Name:mkdec6f26395f187031104148d2e41f41e1ae42b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 10:36:38.897140    7544 out.go:97] Starting control plane node download-only-792600 in cluster download-only-792600
	I0122 10:36:38.897140    7544 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0122 10:36:38.938957    7544 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0122 10:36:38.938957    7544 cache.go:56] Caching tarball of preloaded images
	I0122 10:36:38.940051    7544 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0122 10:36:38.941082    7544 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0122 10:36:38.941082    7544 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0122 10:36:39.012313    7544 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:47acda482c3add5b56147c92b8d7f468 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0122 10:36:42.390515    7544 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0122 10:36:42.391579    7544 preload.go:256] verifying checksum of C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-792600"

-- /stdout --
** stderr ** 
	W0122 10:36:44.457377    6868 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.26s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/DeleteAll (1.25s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.2477496s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (1.25s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (1.21s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-792600
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-792600: (1.2123822s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (1.21s)

                                                
                                    
TestBinaryMirror (7.22s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-350600 --alsologtostderr --binary-mirror http://127.0.0.1:54978 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-350600 --alsologtostderr --binary-mirror http://127.0.0.1:54978 --driver=hyperv: (6.3260906s)
helpers_test.go:175: Cleaning up "binary-mirror-350600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-350600
--- PASS: TestBinaryMirror (7.22s)

                                                
                                    
TestOffline (252.37s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-942800 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-942800 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (3m30.1007196s)
helpers_test.go:175: Cleaning up "offline-docker-942800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-942800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-942800: (42.2706062s)
--- PASS: TestOffline (252.37s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.28s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-760600
addons_test.go:928: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-760600: exit status 85 (273.8334ms)

-- stdout --
	* Profile "addons-760600" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-760600"

-- /stdout --
** stderr ** 
	W0122 10:36:58.046516   11376 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.28s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.29s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-760600
addons_test.go:939: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-760600: exit status 85 (293.4143ms)

-- stdout --
	* Profile "addons-760600" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-760600"

-- /stdout --
** stderr ** 
	W0122 10:36:58.045519    3248 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.29s)

                                                
                                    
TestAddons/Setup (385.71s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-760600 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-760600 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (6m25.7098739s)
--- PASS: TestAddons/Setup (385.71s)

                                                
                                    
TestAddons/parallel/Ingress (65.61s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-760600 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-760600 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-760600 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a78ae939-0517-49e7-9b96-df087d7a48a5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a78ae939-0517-49e7-9b96-df087d7a48a5] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.0204628s
addons_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-760600 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p addons-760600 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (10.5480426s)
addons_test.go:269: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-760600 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0122 10:44:47.707595   10248 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:286: (dbg) Run:  kubectl --context addons-760600 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-760600 ip
addons_test.go:291: (dbg) Done: out/minikube-windows-amd64.exe -p addons-760600 ip: (3.0129599s)
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 172.23.176.254
addons_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-760600 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe -p addons-760600 addons disable ingress-dns --alsologtostderr -v=1: (15.705738s)
addons_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-760600 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe -p addons-760600 addons disable ingress --alsologtostderr -v=1: (21.8780817s)
--- PASS: TestAddons/parallel/Ingress (65.61s)

                                                
                                    
TestAddons/parallel/InspektorGadget (26.65s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-ntxzr" [3882ea5e-b242-4ef6-b1c8-ecc83c0a05ff] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0226471s
addons_test.go:841: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-760600
addons_test.go:841: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-760600: (21.6253618s)
--- PASS: TestAddons/parallel/InspektorGadget (26.65s)

                                                
                                    
TestAddons/parallel/MetricsServer (22s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 36.4857ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-kqsvz" [197e029d-fbd7-4717-b6c4-fcddcae24cff] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0214458s
addons_test.go:415: (dbg) Run:  kubectl --context addons-760600 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-760600 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-windows-amd64.exe -p addons-760600 addons disable metrics-server --alsologtostderr -v=1: (16.7449304s)
--- PASS: TestAddons/parallel/MetricsServer (22.00s)

                                                
                                    
TestAddons/parallel/HelmTiller (28.75s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 8.5691ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-lxkrx" [8a6fb3e0-d1b7-42f3-a860-a1f9da93bb4b] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0084306s
addons_test.go:473: (dbg) Run:  kubectl --context addons-760600 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-760600 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.0174776s)
addons_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-760600 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:490: (dbg) Done: out/minikube-windows-amd64.exe -p addons-760600 addons disable helm-tiller --alsologtostderr -v=1: (16.698799s)
--- PASS: TestAddons/parallel/HelmTiller (28.75s)

                                                
                                    
TestAddons/parallel/CSI (109.99s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 37.8474ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-760600 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760600 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-760600 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [be13c61d-5512-42a2-be90-c2a7f0543287] Pending
helpers_test.go:344: "task-pv-pod" [be13c61d-5512-42a2-be90-c2a7f0543287] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [be13c61d-5512-42a2-be90-c2a7f0543287] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 25.0148185s
addons_test.go:584: (dbg) Run:  kubectl --context addons-760600 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-760600 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-760600 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-760600 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-760600 delete pod task-pv-pod: (1.4282903s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-760600 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-760600 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-760600 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [ac80a79e-e436-44c7-9a5e-07c13d286f97] Pending
helpers_test.go:344: "task-pv-pod-restore" [ac80a79e-e436-44c7-9a5e-07c13d286f97] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [ac80a79e-e436-44c7-9a5e-07c13d286f97] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 14.0199956s
addons_test.go:626: (dbg) Run:  kubectl --context addons-760600 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-760600 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-760600 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-760600 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-windows-amd64.exe -p addons-760600 addons disable csi-hostpath-driver --alsologtostderr -v=1: (22.4001507s)
addons_test.go:642: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-760600 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-windows-amd64.exe -p addons-760600 addons disable volumesnapshots --alsologtostderr -v=1: (16.5422969s)
--- PASS: TestAddons/parallel/CSI (109.99s)
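The run of identical helpers_test.go:394 lines above is the harness polling the restored PVC until its phase reaches Bound. A minimal sketch of that wait pattern in shell; the `wait_for_phase` helper and the `echo Bound` stub probe are illustrative stand-ins, not minikube test code:

```shell
# wait_for_phase: re-run a probe command until it prints the wanted value
# or the retry budget is exhausted (mirrors the repeated jsonpath polls above).
wait_for_phase() {
  want=$1; tries=$2; shift 2
  i=1
  while [ "$i" -le "$tries" ]; do
    if [ "$("$@")" = "$want" ]; then
      echo "reached $want after $i check(s)"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timed out waiting for $want" >&2
  return 1
}

# Against a live cluster the probe would be the query shown in the log:
#   kubectl --context addons-760600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
# Stub probe so the sketch runs without a cluster:
wait_for_phase Bound 5 echo Bound   # prints: reached Bound after 1 check(s)
```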

TestAddons/parallel/Headlamp (37.85s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-760600 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-760600 --alsologtostderr -v=1: (16.8305706s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-wfpbq" [9edaf3a8-3410-4050-9173-2d8eb81ee652] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-wfpbq" [9edaf3a8-3410-4050-9173-2d8eb81ee652] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-wfpbq" [9edaf3a8-3410-4050-9173-2d8eb81ee652] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 21.0156415s
--- PASS: TestAddons/parallel/Headlamp (37.85s)

TestAddons/parallel/CloudSpanner (21.53s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-tcpfc" [2ede04f9-dbc4-4c94-9983-e4554447d6eb] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0077045s
addons_test.go:860: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-760600
addons_test.go:860: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-760600: (16.516258s)
--- PASS: TestAddons/parallel/CloudSpanner (21.53s)

TestAddons/parallel/LocalPath (32.15s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-760600 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-760600 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760600 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [684f9d12-a1d9-43f9-9b9d-d1ba4e2942fd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [684f9d12-a1d9-43f9-9b9d-d1ba4e2942fd] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [684f9d12-a1d9-43f9-9b9d-d1ba4e2942fd] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.009469s
addons_test.go:891: (dbg) Run:  kubectl --context addons-760600 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-760600 ssh "cat /opt/local-path-provisioner/pvc-957c413a-deb5-4049-8cba-7fb764975890_default_test-pvc/file1"
addons_test.go:900: (dbg) Done: out/minikube-windows-amd64.exe -p addons-760600 ssh "cat /opt/local-path-provisioner/pvc-957c413a-deb5-4049-8cba-7fb764975890_default_test-pvc/file1": (10.9896572s)
addons_test.go:912: (dbg) Run:  kubectl --context addons-760600 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-760600 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-760600 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-windows-amd64.exe -p addons-760600 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (8.1873742s)
--- PASS: TestAddons/parallel/LocalPath (32.15s)

TestAddons/parallel/NvidiaDevicePlugin (20.91s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-qf96s" [b9bc72da-3e91-48d1-8330-05aedba39efe] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0101216s
addons_test.go:955: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-760600
addons_test.go:955: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-760600: (15.8952796s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (20.91s)

TestAddons/parallel/Yakd (6.02s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-brt5f" [0f35bb61-3638-4160-97c1-0cd75e98e941] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.0203764s
--- PASS: TestAddons/parallel/Yakd (6.02s)

TestAddons/serial/GCPAuth/Namespaces (0.36s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-760600 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-760600 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.36s)

TestAddons/StoppedEnableDisable (47.71s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-760600
addons_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-760600: (35.7074849s)
addons_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-760600
addons_test.go:176: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-760600: (4.8120624s)
addons_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-760600
addons_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-760600: (4.6440004s)
addons_test.go:185: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-760600
addons_test.go:185: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-760600: (2.5494248s)
--- PASS: TestAddons/StoppedEnableDisable (47.71s)

TestCertOptions (446.52s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-806400 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-806400 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: (6m25.885606s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-806400 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-806400 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (11.190041s)
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-806400 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-806400 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-806400 -- "sudo cat /etc/kubernetes/admin.conf": (9.9847254s)
helpers_test.go:175: Cleaning up "cert-options-806400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-806400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-806400: (39.2971456s)
--- PASS: TestCertOptions (446.52s)

TestCertExpiration (957.71s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-408000 --memory=2048 --cert-expiration=3m --driver=hyperv
E0122 12:58:51.185443    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-671200\client.crt: The system cannot find the path specified.
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-408000 --memory=2048 --cert-expiration=3m --driver=hyperv: (6m57.302494s)
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-408000 --memory=2048 --cert-expiration=8760h --driver=hyperv
E0122 13:08:51.178036    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-671200\client.crt: The system cannot find the path specified.
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-408000 --memory=2048 --cert-expiration=8760h --driver=hyperv: (5m16.8216015s)
helpers_test.go:175: Cleaning up "cert-expiration-408000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-408000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-408000: (43.5680332s)
--- PASS: TestCertExpiration (957.71s)

TestDockerFlags (473.28s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-997300 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
E0122 13:03:23.983250    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-997300 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (6m56.0261754s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-997300 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-997300 ssh "sudo systemctl show docker --property=Environment --no-pager": (10.0199188s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-997300 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-997300 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (10.0976196s)
helpers_test.go:175: Cleaning up "docker-flags-997300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-997300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-997300: (37.1372822s)
--- PASS: TestDockerFlags (473.28s)
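docker_test.go:56 and :67 above assert that values passed via --docker-env on the start command surface in dockerd's systemd unit. A hedged sketch of that check against a sample `systemctl show docker --property=Environment` line; the sample line is illustrative, not captured from this run:

```shell
# Sample output in the shape systemctl prints for the Environment property;
# the values correspond to the --docker-env flags on the test's start command.
line='Environment=FOO=BAR BAZ=BAT'

# Check that each expected assignment appears as a whole word in the list.
for want in FOO=BAR BAZ=BAT; do
  case " ${line#Environment=} " in
    *" $want "*) echo "found $want" ;;
    *)           echo "missing $want" >&2; exit 1 ;;
  esac
done
```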

TestForceSystemdEnv (404.74s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-833700 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-833700 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: (5m59.5348289s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-833700 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-833700 ssh "docker info --format {{.CgroupDriver}}": (10.409335s)
helpers_test.go:175: Cleaning up "force-systemd-env-833700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-833700
E0122 13:01:27.239786    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-833700: (34.7982345s)
--- PASS: TestForceSystemdEnv (404.74s)

TestErrorSpam/start (17.65s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-526800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-526800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 start --dry-run: (5.7934761s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-526800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 start --dry-run
E0122 10:51:07.918831    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-526800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 start --dry-run: (5.9116437s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-526800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-526800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 start --dry-run: (5.9453132s)
--- PASS: TestErrorSpam/start (17.65s)

TestErrorSpam/status (37.24s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-526800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-526800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 status: (12.8352413s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-526800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-526800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 status: (12.1770777s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-526800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-526800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 status: (12.2253552s)
--- PASS: TestErrorSpam/status (37.24s)

TestErrorSpam/pause (23.15s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-526800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-526800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 pause: (7.882457s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-526800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-526800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 pause: (7.7253072s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-526800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-526800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 pause: (7.5409134s)
--- PASS: TestErrorSpam/pause (23.15s)

TestErrorSpam/unpause (23.27s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-526800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-526800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 unpause: (7.8686632s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-526800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-526800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 unpause: (7.7435038s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-526800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-526800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 unpause: (7.6573925s)
--- PASS: TestErrorSpam/unpause (23.27s)

TestErrorSpam/stop (52.29s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-526800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-526800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 stop: (34.4471748s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-526800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-526800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 stop: (9.0961277s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-526800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 stop
E0122 10:53:23.937202    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-526800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-526800 stop: (8.7450829s)
--- PASS: TestErrorSpam/stop (52.29s)

TestFunctional/serial/CopySyncFile (0.04s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\6396\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.04s)

TestFunctional/serial/StartWithProxy (233.46s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-969800 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E0122 10:53:51.767312    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
functional_test.go:2233: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-969800 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (3m53.449214s)
--- PASS: TestFunctional/serial/StartWithProxy (233.46s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/KubeContext (0.16s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.16s)

TestFunctional/serial/CacheCmd/cache/add_remote (348.81s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-969800 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-969800 cache add registry.k8s.io/pause:3.1: (1m47.802314s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-969800 cache add registry.k8s.io/pause:3.3
E0122 11:08:23.955695    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-969800 cache add registry.k8s.io/pause:3.3: (2m0.5132894s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-969800 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-969800 cache add registry.k8s.io/pause:latest: (2m0.4899955s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (348.81s)

TestFunctional/serial/CacheCmd/cache/add_local (60.83s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-969800 C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3719427228\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-969800 C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3719427228\001: (1.4957818s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-969800 cache add minikube-local-cache-test:functional-969800
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-969800 cache add minikube-local-cache-test:functional-969800: (58.8409912s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-969800 cache delete minikube-local-cache-test:functional-969800
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-969800
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (60.83s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.26s)

TestFunctional/serial/CacheCmd/cache/list (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.27s)

TestFunctional/serial/CacheCmd/cache/delete (0.55s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.55s)

TestFunctional/serial/LogsCmd (108.28s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-969800 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-969800 logs: (1m48.2794579s)
--- PASS: TestFunctional/serial/LogsCmd (108.28s)

TestFunctional/delete_addon-resizer_images (0.02s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Non-zero exit: docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8: context deadline exceeded (0s)
functional_test.go:191: failed to remove image "gcr.io/google-containers/addon-resizer:1.8.8" from docker images. args "docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8": context deadline exceeded
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-969800
functional_test.go:189: (dbg) Non-zero exit: docker rmi -f gcr.io/google-containers/addon-resizer:functional-969800: context deadline exceeded (108.8µs)
functional_test.go:191: failed to remove image "gcr.io/google-containers/addon-resizer:functional-969800" from docker images. args "docker rmi -f gcr.io/google-containers/addon-resizer:functional-969800": context deadline exceeded
--- PASS: TestFunctional/delete_addon-resizer_images (0.02s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-969800
functional_test.go:197: (dbg) Non-zero exit: docker rmi -f localhost/my-image:functional-969800: context deadline exceeded (0s)
functional_test.go:199: failed to remove image my-image from docker images. args "docker rmi -f localhost/my-image:functional-969800": context deadline exceeded
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-969800
functional_test.go:205: (dbg) Non-zero exit: docker rmi -f minikube-local-cache-test:functional-969800: context deadline exceeded (0s)
functional_test.go:207: failed to remove image minikube local cache test images from docker. args "docker rmi -f minikube-local-cache-test:functional-969800": context deadline exceeded
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestImageBuild/serial/Setup (190.63s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-976600 --driver=hyperv
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-976600 --driver=hyperv: (3m10.6293472s)
--- PASS: TestImageBuild/serial/Setup (190.63s)

TestImageBuild/serial/NormalBuild (9.06s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-976600
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-976600: (9.0585299s)
--- PASS: TestImageBuild/serial/NormalBuild (9.06s)

TestImageBuild/serial/BuildWithBuildArg (8.84s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-976600
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-976600: (8.840608s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (8.84s)

TestImageBuild/serial/BuildWithDockerIgnore (7.71s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-976600
E0122 11:38:07.164869    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-976600: (7.7127616s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (7.71s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.59s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-976600
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-976600: (7.5888345s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.59s)

TestIngressAddonLegacy/StartLegacyK8sCluster (243s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-windows-amd64.exe start -p ingress-addon-legacy-671200 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperv
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-windows-amd64.exe start -p ingress-addon-legacy-671200 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperv: (4m3.0029928s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (243.00s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (40.51s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-671200 addons enable ingress --alsologtostderr -v=5
E0122 11:43:23.960623    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-671200 addons enable ingress --alsologtostderr -v=5: (40.5130899s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (40.51s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (14.6s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-671200 addons enable ingress-dns --alsologtostderr -v=5
ingress_addon_legacy_test.go:79: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-671200 addons enable ingress-dns --alsologtostderr -v=5: (14.6018549s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (14.60s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (82.08s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-671200 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-671200 replace --force -f testdata\nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-671200 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a26edc16-236e-4b59-afca-39a9d42e59d2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a26edc16-236e-4b59-afca-39a9d42e59d2] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 30.0163284s
addons_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-671200 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-671200 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (9.6249153s)
addons_test.go:269: debug: unexpected stderr for out/minikube-windows-amd64.exe -p ingress-addon-legacy-671200 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0122 11:44:22.760254   10900 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-671200 replace --force -f testdata\ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-671200 ip
addons_test.go:291: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-671200 ip: (2.5693777s)
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 172.23.178.211
addons_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-671200 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-671200 addons disable ingress-dns --alsologtostderr -v=1: (16.0433078s)
addons_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-671200 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-671200 addons disable ingress --alsologtostderr -v=1: (21.5542383s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (82.08s)

TestJSONOutput/start/Command (234.18s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-265300 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0122 11:48:23.956145    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
E0122 11:48:51.163169    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-671200\client.crt: The system cannot find the path specified.
E0122 11:48:51.178292    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-671200\client.crt: The system cannot find the path specified.
E0122 11:48:51.194897    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-671200\client.crt: The system cannot find the path specified.
E0122 11:48:51.226971    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-671200\client.crt: The system cannot find the path specified.
E0122 11:48:51.274169    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-671200\client.crt: The system cannot find the path specified.
E0122 11:48:51.370168    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-671200\client.crt: The system cannot find the path specified.
E0122 11:48:51.544357    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-671200\client.crt: The system cannot find the path specified.
E0122 11:48:51.880187    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-671200\client.crt: The system cannot find the path specified.
E0122 11:48:52.532257    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-671200\client.crt: The system cannot find the path specified.
E0122 11:48:53.822817    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-671200\client.crt: The system cannot find the path specified.
E0122 11:48:56.394980    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-671200\client.crt: The system cannot find the path specified.
E0122 11:49:01.531032    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-671200\client.crt: The system cannot find the path specified.
E0122 11:49:11.777684    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-671200\client.crt: The system cannot find the path specified.
E0122 11:49:32.262204    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-671200\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-265300 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (3m54.1839378s)
--- PASS: TestJSONOutput/start/Command (234.18s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (7.93s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-265300 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-265300 --output=json --user=testUser: (7.9309979s)
--- PASS: TestJSONOutput/pause/Command (7.93s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (7.79s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-265300 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-265300 --output=json --user=testUser: (7.7929949s)
--- PASS: TestJSONOutput/unpause/Command (7.79s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (28.96s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-265300 --output=json --user=testUser
E0122 11:50:13.227527    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-671200\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-265300 --output=json --user=testUser: (28.9576971s)
--- PASS: TestJSONOutput/stop/Command (28.96s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (1.53s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-672000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-672000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (266.0045ms)

-- stdout --
	{"specversion":"1.0","id":"1125313e-8c2d-4552-9655-6e8dfcde4670","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-672000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a3bbf23f-0397-4ba0-9e07-887b9246cda1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube7\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"bbcc2f6d-a1d0-4cd7-a7f2-a3bba8a2eb56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5f1df5ca-9415-44d7-a165-a938372f6e4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"5f7bfea8-346c-4321-a0e1-b9fc2d3975a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18014"}}
	{"specversion":"1.0","id":"d0c68077-2136-44fa-b41d-b319257380ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b918df26-0e26-44b5-b77f-47d985857a24","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
** stderr ** 
	W0122 11:50:51.552634    2508 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-672000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-672000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-672000: (1.2605351s)
--- PASS: TestErrorJSONOutput (1.53s)

TestMainNoArgs (0.25s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.25s)

TestMinikubeProfile (494.52s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-815300 --driver=hyperv
E0122 11:51:35.164379    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-671200\client.crt: The system cannot find the path specified.
E0122 11:53:23.966271    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
E0122 11:53:51.159213    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-671200\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-815300 --driver=hyperv: (3m13.203075s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-815300 --driver=hyperv
E0122 11:54:19.008156    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-671200\client.crt: The system cannot find the path specified.
E0122 11:54:47.184069    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-815300 --driver=hyperv: (3m12.2782322s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-815300
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (14.8733604s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-815300
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (14.767755s)
helpers_test.go:175: Cleaning up "second-815300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-815300
E0122 11:58:23.966984    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-815300: (35.9213594s)
helpers_test.go:175: Cleaning up "first-815300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-815300
E0122 11:58:51.171912    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-671200\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-815300: (42.5092834s)
--- PASS: TestMinikubeProfile (494.52s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (148.08s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-524900 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-524900 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m27.0769276s)
--- PASS: TestMountStart/serial/StartWithMountFirst (148.08s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (9.61s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-524900 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-524900 ssh -- ls /minikube-host: (9.6067613s)
--- PASS: TestMountStart/serial/VerifyMountFirst (9.61s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (148.52s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-524900 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E0122 12:03:23.956912    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
E0122 12:03:51.173818    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-671200\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-524900 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m27.5161724s)
--- PASS: TestMountStart/serial/StartWithMountSecond (148.52s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (9.60s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-524900 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-524900 ssh -- ls /minikube-host: (9.6038304s)
--- PASS: TestMountStart/serial/VerifyMountSecond (9.60s)

                                                
                                    
TestMountStart/serial/DeleteFirst (25.91s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-524900 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-524900 --alsologtostderr -v=5: (25.9129264s)
--- PASS: TestMountStart/serial/DeleteFirst (25.91s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (9.50s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-524900 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-524900 ssh -- ls /minikube-host: (9.4953959s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (9.50s)

                                                
                                    
TestMountStart/serial/Stop (21.39s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-524900
E0122 12:05:14.375484    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-671200\client.crt: The system cannot find the path specified.
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-524900: (21.3878478s)
--- PASS: TestMountStart/serial/Stop (21.39s)

                                                
                                    
TestMountStart/serial/RestartStopped (111.22s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-524900
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-524900: (1m50.2160579s)
--- PASS: TestMountStart/serial/RestartStopped (111.22s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (9.57s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-524900 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-524900 ssh -- ls /minikube-host: (9.5665422s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (9.57s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (407.13s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-484200 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0122 12:08:23.962294    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
E0122 12:08:51.170775    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-671200\client.crt: The system cannot find the path specified.
E0122 12:11:27.190782    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
E0122 12:13:23.965420    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
E0122 12:13:51.175483    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-671200\client.crt: The system cannot find the path specified.
multinode_test.go:86: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-484200 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (6m23.0280809s)
multinode_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 status --alsologtostderr
multinode_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-484200 status --alsologtostderr: (24.1044409s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (407.13s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (9.25s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-484200 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-484200 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-484200 -- rollout status deployment/busybox: (3.1593959s)
multinode_test.go:521: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-484200 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-484200 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-484200 -- exec busybox-5b5d89c9d6-flgvf -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-484200 -- exec busybox-5b5d89c9d6-flgvf -- nslookup kubernetes.io: (1.7609132s)
multinode_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-484200 -- exec busybox-5b5d89c9d6-r46hz -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-484200 -- exec busybox-5b5d89c9d6-flgvf -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-484200 -- exec busybox-5b5d89c9d6-r46hz -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-484200 -- exec busybox-5b5d89c9d6-flgvf -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-484200 -- exec busybox-5b5d89c9d6-r46hz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (9.25s)

                                                
                                    
TestMultiNode/serial/AddNode (215.06s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-484200 -v 3 --alsologtostderr
E0122 12:18:23.973764    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
multinode_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-484200 -v 3 --alsologtostderr: (2m59.1595332s)
multinode_test.go:117: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 status --alsologtostderr
E0122 12:18:51.168743    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-671200\client.crt: The system cannot find the path specified.
multinode_test.go:117: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-484200 status --alsologtostderr: (35.8988807s)
--- PASS: TestMultiNode/serial/AddNode (215.06s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.20s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-484200 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.20s)

                                                
                                    
TestMultiNode/serial/ProfileList (7.69s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (7.6898984s)
--- PASS: TestMultiNode/serial/ProfileList (7.69s)
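The `profile list --output json` invocation above emits machine-readable JSON rather than the human-readable table. As a hedged sketch (the `valid`/`invalid` top-level keys and the `Name` field follow minikube's JSON output mode, but treat the exact schema as an assumption, not something confirmed by this log), consuming that output might look like:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// profileList mirrors a subset of the JSON printed by
// `minikube profile list --output json`. The field names here are an
// assumption based on the command's JSON mode, not taken from this log.
type profileList struct {
	Valid   []profile `json:"valid"`
	Invalid []profile `json:"invalid"`
}

type profile struct {
	Name string `json:"Name"`
}

// validNames returns the names of the valid profiles in raw JSON output.
func validNames(raw []byte) ([]string, error) {
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		return nil, err
	}
	names := make([]string, 0, len(pl.Valid))
	for _, p := range pl.Valid {
		names = append(names, p.Name)
	}
	return names, nil
}

func main() {
	// Sample shaped like the output for the profile exercised above.
	sample := []byte(`{"invalid":[],"valid":[{"Name":"multinode-484200"}]}`)
	names, err := validNames(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(names)
}
```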

                                                
                                    
TestMultiNode/serial/CopyFile (363.94s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 status --output json --alsologtostderr
multinode_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-484200 status --output json --alsologtostderr: (35.946542s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 cp testdata\cp-test.txt multinode-484200:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-484200 cp testdata\cp-test.txt multinode-484200:/home/docker/cp-test.txt: (9.5343998s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 ssh -n multinode-484200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-484200 ssh -n multinode-484200 "sudo cat /home/docker/cp-test.txt": (9.5090216s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 cp multinode-484200:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile3063177298\001\cp-test_multinode-484200.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-484200 cp multinode-484200:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile3063177298\001\cp-test_multinode-484200.txt: (9.4972939s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 ssh -n multinode-484200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-484200 ssh -n multinode-484200 "sudo cat /home/docker/cp-test.txt": (9.4773325s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 cp multinode-484200:/home/docker/cp-test.txt multinode-484200-m02:/home/docker/cp-test_multinode-484200_multinode-484200-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-484200 cp multinode-484200:/home/docker/cp-test.txt multinode-484200-m02:/home/docker/cp-test_multinode-484200_multinode-484200-m02.txt: (16.5961197s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 ssh -n multinode-484200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-484200 ssh -n multinode-484200 "sudo cat /home/docker/cp-test.txt": (9.4714657s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 ssh -n multinode-484200-m02 "sudo cat /home/docker/cp-test_multinode-484200_multinode-484200-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-484200 ssh -n multinode-484200-m02 "sudo cat /home/docker/cp-test_multinode-484200_multinode-484200-m02.txt": (9.5002031s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 cp multinode-484200:/home/docker/cp-test.txt multinode-484200-m03:/home/docker/cp-test_multinode-484200_multinode-484200-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-484200 cp multinode-484200:/home/docker/cp-test.txt multinode-484200-m03:/home/docker/cp-test_multinode-484200_multinode-484200-m03.txt: (16.5594299s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 ssh -n multinode-484200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-484200 ssh -n multinode-484200 "sudo cat /home/docker/cp-test.txt": (9.5526038s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 ssh -n multinode-484200-m03 "sudo cat /home/docker/cp-test_multinode-484200_multinode-484200-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-484200 ssh -n multinode-484200-m03 "sudo cat /home/docker/cp-test_multinode-484200_multinode-484200-m03.txt": (9.4935762s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 cp testdata\cp-test.txt multinode-484200-m02:/home/docker/cp-test.txt
E0122 12:21:54.383570    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-671200\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-484200 cp testdata\cp-test.txt multinode-484200-m02:/home/docker/cp-test.txt: (9.6255606s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 ssh -n multinode-484200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-484200 ssh -n multinode-484200-m02 "sudo cat /home/docker/cp-test.txt": (9.6538673s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 cp multinode-484200-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile3063177298\001\cp-test_multinode-484200-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-484200 cp multinode-484200-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile3063177298\001\cp-test_multinode-484200-m02.txt: (9.5764594s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 ssh -n multinode-484200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-484200 ssh -n multinode-484200-m02 "sudo cat /home/docker/cp-test.txt": (9.5466118s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 cp multinode-484200-m02:/home/docker/cp-test.txt multinode-484200:/home/docker/cp-test_multinode-484200-m02_multinode-484200.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-484200 cp multinode-484200-m02:/home/docker/cp-test.txt multinode-484200:/home/docker/cp-test_multinode-484200-m02_multinode-484200.txt: (16.6220332s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 ssh -n multinode-484200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-484200 ssh -n multinode-484200-m02 "sudo cat /home/docker/cp-test.txt": (9.456903s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 ssh -n multinode-484200 "sudo cat /home/docker/cp-test_multinode-484200-m02_multinode-484200.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-484200 ssh -n multinode-484200 "sudo cat /home/docker/cp-test_multinode-484200-m02_multinode-484200.txt": (9.4002049s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 cp multinode-484200-m02:/home/docker/cp-test.txt multinode-484200-m03:/home/docker/cp-test_multinode-484200-m02_multinode-484200-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-484200 cp multinode-484200-m02:/home/docker/cp-test.txt multinode-484200-m03:/home/docker/cp-test_multinode-484200-m02_multinode-484200-m03.txt: (16.6026154s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 ssh -n multinode-484200-m02 "sudo cat /home/docker/cp-test.txt"
E0122 12:23:23.973561    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-484200 ssh -n multinode-484200-m02 "sudo cat /home/docker/cp-test.txt": (9.4794698s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 ssh -n multinode-484200-m03 "sudo cat /home/docker/cp-test_multinode-484200-m02_multinode-484200-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-484200 ssh -n multinode-484200-m03 "sudo cat /home/docker/cp-test_multinode-484200-m02_multinode-484200-m03.txt": (9.4945632s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 cp testdata\cp-test.txt multinode-484200-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-484200 cp testdata\cp-test.txt multinode-484200-m03:/home/docker/cp-test.txt: (9.5521652s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 ssh -n multinode-484200-m03 "sudo cat /home/docker/cp-test.txt"
E0122 12:23:51.163824    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-671200\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-484200 ssh -n multinode-484200-m03 "sudo cat /home/docker/cp-test.txt": (9.4886072s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 cp multinode-484200-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile3063177298\001\cp-test_multinode-484200-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-484200 cp multinode-484200-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile3063177298\001\cp-test_multinode-484200-m03.txt: (9.5712598s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 ssh -n multinode-484200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-484200 ssh -n multinode-484200-m03 "sudo cat /home/docker/cp-test.txt": (9.6461858s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 cp multinode-484200-m03:/home/docker/cp-test.txt multinode-484200:/home/docker/cp-test_multinode-484200-m03_multinode-484200.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-484200 cp multinode-484200-m03:/home/docker/cp-test.txt multinode-484200:/home/docker/cp-test_multinode-484200-m03_multinode-484200.txt: (16.5645836s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 ssh -n multinode-484200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-484200 ssh -n multinode-484200-m03 "sudo cat /home/docker/cp-test.txt": (9.4714037s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 ssh -n multinode-484200 "sudo cat /home/docker/cp-test_multinode-484200-m03_multinode-484200.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-484200 ssh -n multinode-484200 "sudo cat /home/docker/cp-test_multinode-484200-m03_multinode-484200.txt": (9.5015975s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 cp multinode-484200-m03:/home/docker/cp-test.txt multinode-484200-m02:/home/docker/cp-test_multinode-484200-m03_multinode-484200-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-484200 cp multinode-484200-m03:/home/docker/cp-test.txt multinode-484200-m02:/home/docker/cp-test_multinode-484200-m03_multinode-484200-m02.txt: (16.4913468s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 ssh -n multinode-484200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-484200 ssh -n multinode-484200-m03 "sudo cat /home/docker/cp-test.txt": (9.4077928s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 ssh -n multinode-484200-m02 "sudo cat /home/docker/cp-test_multinode-484200-m03_multinode-484200-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-484200 ssh -n multinode-484200-m02 "sudo cat /home/docker/cp-test_multinode-484200-m03_multinode-484200-m02.txt": (9.6343035s)
--- PASS: TestMultiNode/serial/CopyFile (363.94s)

                                                
                                    
TestMultiNode/serial/StopNode (66.49s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-484200 node stop m03: (14.0841386s)
multinode_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-484200 status: exit status 7 (26.2721008s)

-- stdout --
	multinode-484200
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-484200-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-484200-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0122 12:25:43.541817    3400 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
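The exit-status-7 dump above is the plain-text form of `minikube status`: a node name line followed by indented `key: value` fields. A hedged Go sketch of folding such a block into one map per node (the field names come from the stdout above; the parser itself is illustrative, not minikube's) is:

```go
package main

import (
	"fmt"
	"strings"
)

// parseStatus splits plain `minikube status` output into one map per
// node. A non-empty line without ": " is treated as a node name and
// starts a new section; "key: value" lines attach to the current node.
func parseStatus(out string) []map[string]string {
	var nodes []map[string]string
	var cur map[string]string
	for _, line := range strings.Split(out, "\n") {
		line = strings.TrimSpace(line)
		if line == "" {
			continue
		}
		if k, v, ok := strings.Cut(line, ": "); ok && cur != nil {
			cur[k] = v
			continue
		}
		cur = map[string]string{"name": line}
		nodes = append(nodes, cur)
	}
	return nodes
}

func main() {
	out := "multinode-484200\nhost: Running\nkubelet: Running\n\n" +
		"multinode-484200-m03\nhost: Stopped\nkubelet: Stopped\n"
	nodes := parseStatus(out)
	fmt.Println(len(nodes), nodes[1]["host"]) // 2 Stopped
}
```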
multinode_test.go:251: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-484200 status --alsologtostderr: exit status 7 (26.1280397s)

-- stdout --
	multinode-484200
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-484200-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-484200-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0122 12:26:09.813408    2224 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0122 12:26:09.890412    2224 out.go:296] Setting OutFile to fd 760 ...
	I0122 12:26:09.891131    2224 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0122 12:26:09.891131    2224 out.go:309] Setting ErrFile to fd 1004...
	I0122 12:26:09.891208    2224 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0122 12:26:09.904812    2224 out.go:303] Setting JSON to false
	I0122 12:26:09.904812    2224 mustload.go:65] Loading cluster: multinode-484200
	I0122 12:26:09.904812    2224 notify.go:220] Checking for updates...
	I0122 12:26:09.905465    2224 config.go:182] Loaded profile config "multinode-484200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0122 12:26:09.905465    2224 status.go:255] checking status of multinode-484200 ...
	I0122 12:26:09.906461    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:26:12.126829    2224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:26:12.126915    2224 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:26:12.126915    2224 status.go:330] multinode-484200 host status = "Running" (err=<nil>)
	I0122 12:26:12.127056    2224 host.go:66] Checking if "multinode-484200" exists ...
	I0122 12:26:12.127901    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:26:14.279051    2224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:26:14.279051    2224 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:26:14.279051    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:26:16.822462    2224 main.go:141] libmachine: [stdout =====>] : 172.23.177.163
	
	I0122 12:26:16.822462    2224 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:26:16.822462    2224 host.go:66] Checking if "multinode-484200" exists ...
	I0122 12:26:16.835797    2224 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0122 12:26:16.835797    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200 ).state
	I0122 12:26:18.970915    2224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:26:18.970915    2224 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:26:18.970915    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200 ).networkadapters[0]).ipaddresses[0]
	I0122 12:26:21.497262    2224 main.go:141] libmachine: [stdout =====>] : 172.23.177.163
	
	I0122 12:26:21.497524    2224 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:26:21.497791    2224 sshutil.go:53] new ssh client: &{IP:172.23.177.163 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200\id_rsa Username:docker}
	I0122 12:26:21.601864    2224 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.7660466s)
	I0122 12:26:21.616699    2224 ssh_runner.go:195] Run: systemctl --version
	I0122 12:26:21.637437    2224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 12:26:21.660839    2224 kubeconfig.go:92] found "multinode-484200" server: "https://172.23.177.163:8443"
	I0122 12:26:21.660839    2224 api_server.go:166] Checking apiserver status ...
	I0122 12:26:21.673313    2224 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 12:26:21.709559    2224 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2045/cgroup
	I0122 12:26:21.728046    2224 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/pod94caf781e9d5d6851ed3cfe41e8c6cd0/c38b8024525ee5f30f97a688d317982f46476c7c197f6d021ab875b59cf0e544"
	I0122 12:26:21.741415    2224 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod94caf781e9d5d6851ed3cfe41e8c6cd0/c38b8024525ee5f30f97a688d317982f46476c7c197f6d021ab875b59cf0e544/freezer.state
	I0122 12:26:21.757825    2224 api_server.go:204] freezer state: "THAWED"
	I0122 12:26:21.757825    2224 api_server.go:253] Checking apiserver healthz at https://172.23.177.163:8443/healthz ...
	I0122 12:26:21.765977    2224 api_server.go:279] https://172.23.177.163:8443/healthz returned 200:
	ok
	I0122 12:26:21.767015    2224 status.go:421] multinode-484200 apiserver status = Running (err=<nil>)
	I0122 12:26:21.767015    2224 status.go:257] multinode-484200 status: &{Name:multinode-484200 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0122 12:26:21.767015    2224 status.go:255] checking status of multinode-484200-m02 ...
	I0122 12:26:21.767855    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:26:23.914689    2224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:26:23.914857    2224 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:26:23.914857    2224 status.go:330] multinode-484200-m02 host status = "Running" (err=<nil>)
	I0122 12:26:23.914857    2224 host.go:66] Checking if "multinode-484200-m02" exists ...
	I0122 12:26:23.915814    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:26:26.126593    2224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:26:26.126593    2224 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:26:26.126593    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:26:28.720313    2224 main.go:141] libmachine: [stdout =====>] : 172.23.185.74
	
	I0122 12:26:28.720376    2224 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:26:28.720425    2224 host.go:66] Checking if "multinode-484200-m02" exists ...
	I0122 12:26:28.734857    2224 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0122 12:26:28.734857    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m02 ).state
	I0122 12:26:30.904452    2224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0122 12:26:30.904452    2224 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:26:30.904560    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-484200-m02 ).networkadapters[0]).ipaddresses[0]
	I0122 12:26:33.497853    2224 main.go:141] libmachine: [stdout =====>] : 172.23.185.74
	
	I0122 12:26:33.497853    2224 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:26:33.498829    2224 sshutil.go:53] new ssh client: &{IP:172.23.185.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-484200-m02\id_rsa Username:docker}
	I0122 12:26:33.617499    2224 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8826202s)
	I0122 12:26:33.631192    2224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 12:26:33.649615    2224 status.go:257] multinode-484200-m02 status: &{Name:multinode-484200-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0122 12:26:33.649615    2224 status.go:255] checking status of multinode-484200-m03 ...
	I0122 12:26:33.651427    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-484200-m03 ).state
	I0122 12:26:35.774212    2224 main.go:141] libmachine: [stdout =====>] : Off
	
	I0122 12:26:35.774442    2224 main.go:141] libmachine: [stderr =====>] : 
	I0122 12:26:35.774442    2224 status.go:330] multinode-484200-m03 host status = "Stopped" (err=<nil>)
	I0122 12:26:35.774526    2224 status.go:343] host is not running, skipping remaining checks
	I0122 12:26:35.774570    2224 status.go:257] multinode-484200-m03 status: &{Name:multinode-484200-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (66.49s)
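The status probe in the log above reads each node's disk usage over SSH with `sh -c "df -h /var | awk 'NR==2{print $5}'"`. A minimal local sketch of that parse (the sample `df` output below is fabricated for illustration; real runs execute the command on the node via `ssh_runner`):

```shell
#!/bin/sh
# Fabricated sample of `df -h /var` output; the harness gets this over SSH.
df_output='Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        17G  2.9G   14G  18% /var'

# NR==2 skips the header row; $5 is the Use% column.
usage=$(printf '%s\n' "$df_output" | awk 'NR==2{print $5}')
echo "$usage"   # -> 18%
```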

TestMultiNode/serial/StartAfterStop (172.36s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 node start m03 --alsologtostderr
E0122 12:28:07.209621    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
E0122 12:28:23.970586    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
E0122 12:28:51.173792    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-671200\client.crt: The system cannot find the path specified.
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-484200 node start m03 --alsologtostderr: (2m16.3305517s)
multinode_test.go:289: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-484200 status
multinode_test.go:289: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-484200 status: (35.8384774s)
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (172.36s)

TestScheduledStopWindows (323.5s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-037200 --memory=2048 --driver=hyperv
E0122 12:48:23.976194    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-037200 --memory=2048 --driver=hyperv: (3m10.2310707s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-037200 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-037200 --schedule 5m: (10.7569061s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-037200 -n scheduled-stop-037200
E0122 12:48:51.176438    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-671200\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-037200 -n scheduled-stop-037200: exit status 1 (10.0364202s)

** stderr ** 
	W0122 12:48:45.898330   11008 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-037200 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-037200 -- sudo systemctl show minikube-scheduled-stop --no-page: (9.7166306s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-037200 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-037200 --schedule 5s: (10.6014479s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-037200
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-037200: exit status 7 (2.3730715s)

-- stdout --
	scheduled-stop-037200
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	W0122 12:50:16.257268   10388 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-037200 -n scheduled-stop-037200
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-037200 -n scheduled-stop-037200: exit status 7 (2.4444854s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0122 12:50:18.631886    4064 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-037200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-037200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-037200: (27.332043s)
--- PASS: TestScheduledStopWindows (323.50s)
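The scheduled-stop test above verifies the timer by running `sudo systemctl show minikube-scheduled-stop --no-page` inside the VM, which prints `Key=Value` lines. A hedged sketch of pulling one property out of that kind of output (the sample lines are fabricated for illustration; real property names come from systemd):

```shell
#!/bin/sh
# Fabricated sample of `systemctl show <unit> --no-page` output.
show_output='Id=minikube-scheduled-stop.service
ActiveState=active
SubState=running'

# Extract a single property the way a post-processing script would.
state=$(printf '%s\n' "$show_output" | sed -n 's/^ActiveState=//p')
echo "$state"   # -> active
```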

TestRunningBinaryUpgrade (1043.69s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.26.0.164760411.exe start -p running-upgrade-487400 --memory=2200 --vm-driver=hyperv
E0122 12:53:23.980391    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
E0122 12:53:51.174571    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-671200\client.crt: The system cannot find the path specified.
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.26.0.164760411.exe start -p running-upgrade-487400 --memory=2200 --vm-driver=hyperv: (8m16.2436299s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-487400 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-487400 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (8m8.3791675s)
helpers_test.go:175: Cleaning up "running-upgrade-487400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-487400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-487400: (58.0914387s)
--- PASS: TestRunningBinaryUpgrade (1043.69s)

TestKubernetesUpgrade (1311.45s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-634000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-634000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperv: (7m36.3739906s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-634000
E0122 13:03:51.182726    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-671200\client.crt: The system cannot find the path specified.
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-634000: (35.0728827s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-634000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-634000 status --format={{.Host}}: exit status 7 (2.5639285s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0122 13:04:16.116552    5776 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-634000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-634000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv: (6m20.7855014s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-634000 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-634000 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperv
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-634000 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperv: exit status 106 (302.2891ms)

-- stdout --
	* [kubernetes-upgrade-634000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18014
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W0122 13:10:39.655705   12972 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-634000
	    minikube start -p kubernetes-upgrade-634000 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6340002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-634000 --kubernetes-version=v1.29.0-rc.2
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-634000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv
E0122 13:11:54.431129    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-671200\client.crt: The system cannot find the path specified.
version_upgrade_test.go:275: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-634000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv: (6m41.8691253s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-634000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-634000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-634000: (34.2919633s)
--- PASS: TestKubernetesUpgrade (1311.45s)
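The downgrade attempt above is rejected with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED). The actual version check lives in minikube's Go code; a rough shell equivalent of the same ordering idea, using `sort -V` to decide whether the requested version is older than the running one:

```shell
#!/bin/sh
current=v1.29.0-rc.2
requested=v1.16.0

# sort -V orders version strings numerically per component; if the requested
# version sorts first and differs from current, the request is a downgrade.
lowest=$(printf '%s\n%s\n' "$current" "$requested" | sort -V | head -n1)
if [ "$lowest" = "$requested" ] && [ "$current" != "$requested" ]; then
    echo "downgrade from $current to $requested is unsupported"
fi
```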

TestNoKubernetes/serial/StartNoK8sWithVersion (0.33s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-487400 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-487400 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (328.0199ms)

-- stdout --
	* [NoKubernetes-487400] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18014
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W0122 12:50:48.437586    3124 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.33s)
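This test treats exit status 14 (MK_USAGE) as the expected outcome rather than a failure. A small illustration of capturing and branching on an exit code the way the harness does (a stub function stands in for the real minikube invocation):

```shell
#!/bin/sh
# Stub standing in for: minikube start --no-kubernetes --kubernetes-version=1.20
run_cli() { return 14; }

# Capture the exit code without aborting under `set -e`.
status=0
run_cli || status=$?
if [ "$status" -eq 14 ]; then
    echo "got expected usage error (exit $status)"
else
    echo "unexpected exit $status" >&2
fi
```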

TestStoppedBinaryUpgrade/Setup (0.94s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.94s)

TestStoppedBinaryUpgrade/Upgrade (648.27s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.26.0.1822966484.exe start -p stopped-upgrade-742600 --memory=2200 --vm-driver=hyperv
E0122 13:08:23.982388    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.26.0.1822966484.exe start -p stopped-upgrade-742600 --memory=2200 --vm-driver=hyperv: (4m52.8982095s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.26.0.1822966484.exe -p stopped-upgrade-742600 stop
E0122 13:13:23.976630    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.26.0.1822966484.exe -p stopped-upgrade-742600 stop: (36.7657503s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-742600 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0122 13:13:51.191587    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-671200\client.crt: The system cannot find the path specified.
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-742600 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (5m18.6063871s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (648.27s)

TestPause/serial/Start (391.74s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-910500 --memory=2048 --install-addons=false --wait=all --driver=hyperv
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-910500 --memory=2048 --install-addons=false --wait=all --driver=hyperv: (6m31.7426669s)
--- PASS: TestPause/serial/Start (391.74s)

TestPause/serial/SecondStartNoReconfiguration (349.49s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-910500 --alsologtostderr -v=1 --driver=hyperv
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-910500 --alsologtostderr -v=1 --driver=hyperv: (5m49.4661264s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (349.49s)

TestStoppedBinaryUpgrade/MinikubeLogs (9.54s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-742600
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-742600: (9.5378238s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (9.54s)

TestPause/serial/Pause (9.45s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-910500 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-910500 --alsologtostderr -v=5: (9.4522888s)
--- PASS: TestPause/serial/Pause (9.45s)

TestPause/serial/VerifyStatus (12.53s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-910500 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-910500 --output=json --layout=cluster: exit status 2 (12.5338982s)
-- stdout --
	{"Name":"pause-910500","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-910500","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	W0122 13:22:24.522434   10848 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestPause/serial/VerifyStatus (12.53s)
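`status --output=json --layout=cluster` emits a single JSON document; in the stdout above, the paused cluster reports StatusCode 418 ("Paused"). Without a JSON tool available, a crude grep-based peek at the top-level status name looks like this (the sample document is abbreviated from the log above):

```shell
#!/bin/sh
# Abbreviated status JSON from the log above.
status_json='{"Name":"pause-910500","StatusCode":418,"StatusName":"Paused"}'

# grep -o prints each match on its own line; take the first StatusName,
# then cut on double quotes to isolate the value.
name=$(printf '%s' "$status_json" | grep -o '"StatusName":"[^"]*"' | head -n1 | cut -d'"' -f4)
echo "$name"   # -> Paused
```

For anything beyond a quick check, a real JSON parser is the safer choice; this pattern breaks on escaped quotes.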

TestPause/serial/Unpause (8.05s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-910500 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-910500 --alsologtostderr -v=5: (8.0502041s)
--- PASS: TestPause/serial/Unpause (8.05s)

TestPause/serial/PauseAgain (8.15s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-910500 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-910500 --alsologtostderr -v=5: (8.148849s)
--- PASS: TestPause/serial/PauseAgain (8.15s)

TestPause/serial/DeletePaused (36.61s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-910500 --alsologtostderr -v=5
E0122 13:23:23.977783    6396 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-760600\client.crt: The system cannot find the path specified.
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-910500 --alsologtostderr -v=5: (36.6080433s)
--- PASS: TestPause/serial/DeletePaused (36.61s)

TestPause/serial/VerifyDeletedResources (7.94s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (7.9414072s)
--- PASS: TestPause/serial/VerifyDeletedResources (7.94s)

                                                
                                    

Test skip (22/156)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)